Modeling 1
- A blended model combining two tree regressors, XGBRegressor and LGBMRegressor
- Hyperparameters for both trees were tuned beforehand (one possible tuning approach is sketched below)
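The tuning step itself isn't shown in this notebook; the values used below were found ahead of time. For reference, here is a minimal sketch of one way such a search might look with scikit-learn's RandomizedSearchCV — the search space, n_iter, and cv are assumptions, not the search actually run:
# Hypothetical tuning sketch; the actual search used for this notebook is not shown.
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBRegressor

param_dist = {  # assumed search space, not the original one
    'n_estimators': [500, 1000, 2000],
    'learning_rate': [0.01, 0.05, 0.1],
    'subsample': [0.6, 0.8, 1.0],
    'colsample_bytree': [0.4, 0.5, 0.7],
}
search = RandomizedSearchCV(XGBRegressor(), param_distributions=param_dist,
                            n_iter=20, scoring='neg_mean_absolute_error',  # competition metric is MAE
                            cv=3, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_)  # candidate values to feed into the estimators below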
In [ ]:
X_train_1 = X_train.copy()
y_train_1 = y_train.copy()
In [ ]:
from xgboost import XGBRegressor
from lightgbm import LGBMRegressor

xgb_reg = XGBRegressor(n_estimators=1000, learning_rate=0.05, colsample_bytree=0.5, subsample=0.8)
lgbm_reg = LGBMRegressor(n_estimators=1000, learning_rate=0.05, num_leaves=4, subsample=0.6,
                         colsample_bytree=0.4, reg_lambda=10, n_jobs=-1)
xgb_reg.fit(X_train_1, y_train_1)
lgbm_reg.fit(X_train_1, y_train_1)
xgb_pred = xgb_reg.predict(X_test)
lgbm_pred = lgbm_reg.predict(X_test)
pred_1 = 0.5 * xgb_pred + 0.5 * lgbm_pred
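The 50/50 blend weight is a fixed choice. A hedged sketch of how one might pick the weight on a holdout split instead — the split and the weight grid are assumptions; the notebook above fits on all of X_train_1:
# Hypothetical blend-weight search on a holdout split; not part of the original run.
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

X_tr, X_val, y_tr, y_val = train_test_split(X_train_1, y_train_1, test_size=0.2, random_state=42)
xgb_v, lgbm_v = clone(xgb_reg), clone(lgbm_reg)  # fresh copies so the fitted models above stay intact
xgb_val = xgb_v.fit(X_tr, y_tr).predict(X_val)
lgbm_val = lgbm_v.fit(X_tr, y_tr).predict(X_val)
for w in np.arange(0.0, 1.01, 0.1):
    blend = w * xgb_val + (1 - w) * lgbm_val
    print(f'w={w:.1f}  MAE={mean_absolute_error(y_val, blend):.4f}')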
In [ ]:
pred_1
Out[ ]:
array([1.16312296, 0.63567925, 0.99099988, ..., 0.76380604, 1.20200264,
0.83372951])
In [ ]:
submission = pd.read_csv('/content/sample_submission.csv')
submission['풍속 (m/s)'] = pred_1
submission.head()
Out[ ]:
| | ID | 풍속 (m/s) |
|---|---|---|
| 0 | TEST_00000 | 1.163123 |
| 1 | TEST_00001 | 0.635679 |
| 2 | TEST_00002 | 0.991000 |
| 3 | TEST_00003 | 0.815738 |
| 4 | TEST_00004 | 0.925762 |
In [ ]:
submission.to_csv('submission_1.csv', index=False)
score: 1.0976854456
Modeling 2
- Single model: LinearRegression
- For linear regression, converting the important categorical features with one-hot encoding can have a meaningful impact on performance (see the sketch below)
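A minimal sketch of that encoding step, assuming 월, 일, and 측정 시간대 are the categorical features meant here — the cells below actually fit on the unencoded copy:
# Hypothetical one-hot encoding sketch; the notebook below uses the raw features instead.
import pandas as pd

cat_cols = ['월', '일', '측정 시간대']  # assumed categorical columns
X_train_ohe = pd.get_dummies(X_train, columns=cat_cols)
X_test_ohe = pd.get_dummies(X_test, columns=cat_cols)
# Align so train and test end up with identical dummy columns.
X_train_ohe, X_test_ohe = X_train_ohe.align(X_test_ohe, join='left', axis=1, fill_value=0)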
In [ ]:
X_train_2 = X_train.copy()
y_train_2 = y_train.copy()
In [ ]:
from sklearn.linear_model import LinearRegression

lr = LinearRegression()
lr.fit(X_train_2, y_train_2)
pred_2 = lr.predict(X_test)
In [ ]:
pred_2
Out[ ]:
array([ -80.79434694, -169.15894966, -269.98753967, ..., -206.21635544,
-72.72459816, -120.01379312])
In [ ]:
submission = pd.read_csv('/content/sample_submission.csv')
submission['풍속 (m/s)'] = pred_2
submission.head()
Out[ ]:
| | ID | 풍속 (m/s) |
|---|---|---|
| 0 | TEST_00000 | -80.794347 |
| 1 | TEST_00001 | -169.158950 |
| 2 | TEST_00002 | -269.987540 |
| 3 | TEST_00003 | -41.664867 |
| 4 | TEST_00004 | -101.675632 |
In [ ]:
submission.to_csv('submission_2.csv', index=False)
Rank dropped further.
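The large negative predictions above are physically impossible for wind speed, which likely explains the drop. One simple mitigation (not tried here) would be clipping at zero:
# Hypothetical post-processing; not applied in the original notebook.
import numpy as np
pred_2_clipped = np.clip(pred_2, 0, None)  # wind speed cannot be negative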
Modeling 3
- Using AutoGluon
In [ ]:
X_train_3 = X_train.copy()
y_train_3 = y_train.copy()
In [ ]:
y_train_3.head(3)
Out[ ]:
0 0.959350
1 0.985817
2 0.548121
Name: 풍속 (m/s), dtype: float64
In [ ]:
!pip install autogluon
Collecting autogluon
Downloading autogluon-0.8.2-py3-none-any.whl (9.7 kB)
ERROR: Operation cancelled by user
In [ ]:
from autogluon.tabular import TabularDataset, TabularPredictor
import autogluon.core as ag
In [ ]:
train_3 = pd.concat([X_train_3, y_train_3], axis=1)
In [ ]:
train_3.head(3)
Out[ ]:
| | 월 | 일 | 측정 시간대 | 섭씨 온도(°C) | 절대 온도(K) | 이슬점 온도(°C) | 상대 습도 (%) | 대기압(mbar) | 포화 증기압(mbar) | 실제 증기압(mbar) | 증기압 부족량(mbar) | 수증기 함량 (g/kg) | 공기 밀도 (g/m**3) | 풍향 (deg) | 풍속 (m/s) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 7 | 2 | 3 | 13.97 | 287.78 | 9.84 | 76.1 | 992.08 | 2.832036 | 12.16 | 1.572774 | 7.66 | 1198.06 | 155.6 | 0.959350 |
| 1 | 8 | 21 | 1 | 16.94 | 290.85 | 12.14 | 73.3 | 991.07 | 3.012098 | 14.17 | 1.818077 | 8.94 | 1183.67 | 177.0 | 0.985817 |
| 2 | 11 | 1 | 3 | 9.76 | 283.84 | 5.40 | 74.2 | 988.71 | 2.572612 | 8.98 | 1.415853 | 5.67 | 1213.22 | 146.2 | 0.548121 |
In [ ]:
train_3 = TabularDataset(train_3)
X_test = TabularDataset(X_test)
In [ ]:
predictor = TabularPredictor(label='풍속 (m/s)',
                             problem_type='regression',
                             eval_metric='mae').fit(train_3)
No path specified. Models will be saved in: "AutogluonModels/ag-20230727_044102/"
Beginning AutoGluon training ...
AutoGluon will save models to "AutogluonModels/ag-20230727_044102/"
AutoGluon Version: 0.8.2
Python Version: 3.10.6
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Fri Jun 9 10:57:30 UTC 2023
Disk Space Avail: 83.38 GB / 115.66 GB (72.1%)
Train Data Rows: 36581
Train Data Columns: 14
Label Column: 풍속 (m/s)
Preprocessing data ...
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 11037.04 MB
Train Data (Original) Memory Usage: 4.1 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Stage 5 Generators:
Fitting DropDuplicatesFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('float', []) : 11 | ['섭씨 온도(°\u2063C)', '절대 온도(K)', '이슬점 온도(°C)', '상대 습도 (%)', '대기압(mbar)', ...]
('int', []) : 3 | ['월', '일', '측정 시간대']
Types of features in processed data (raw dtype, special dtypes):
('float', []) : 11 | ['섭씨 온도(°\u2063C)', '절대 온도(K)', '이슬점 온도(°C)', '상대 습도 (%)', '대기압(mbar)', ...]
('int', []) : 3 | ['월', '일', '측정 시간대']
0.1s = Fit runtime
14 features in original data used to generate 14 features in processed data.
Train Data (Processed) Memory Usage: 4.1 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.17s ...
AutoGluon will gauge predictive performance using evaluation metric: 'mean_absolute_error'
This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value.
To change this, specify the eval_metric parameter of Predictor()
Automatically generating train/validation split with holdout_frac=0.06834148875099096, Train Rows: 34081, Val Rows: 2500
User-specified model hyperparameters to be fit:
{
'NN_TORCH': {},
'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}}, {}, 'GBMLarge'],
'CAT': {},
'XGB': {},
'FASTAI': {},
'RF': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'XT': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'KNN': [{'weights': 'uniform', 'ag_args': {'name_suffix': 'Unif'}}, {'weights': 'distance', 'ag_args': {'name_suffix': 'Dist'}}],
}
Fitting 11 L1 models ...
Fitting model: KNeighborsUnif ...
-0.1978 = Validation score (-mean_absolute_error)
1.65s = Training runtime
0.07s = Validation runtime
Fitting model: KNeighborsDist ...
-0.1882 = Validation score (-mean_absolute_error)
0.11s = Training runtime
0.07s = Validation runtime
Fitting model: LightGBMXT ...
[1000] valid_set's l1: 0.185561
[2000] valid_set's l1: 0.169956
[3000] valid_set's l1: 0.161669
[4000] valid_set's l1: 0.156781
[5000] valid_set's l1: 0.153135
[6000] valid_set's l1: 0.150523
[7000] valid_set's l1: 0.148405
[8000] valid_set's l1: 0.146782
[9000] valid_set's l1: 0.145277
[10000] valid_set's l1: 0.144252
-0.1443 = Validation score (-mean_absolute_error)
23.19s = Training runtime
3.47s = Validation runtime
Fitting model: LightGBM ...
[1000] valid_set's l1: 0.165738
[2000] valid_set's l1: 0.153494
[3000] valid_set's l1: 0.148446
[4000] valid_set's l1: 0.146113
[5000] valid_set's l1: 0.144918
[6000] valid_set's l1: 0.144045
[7000] valid_set's l1: 0.143417
[8000] valid_set's l1: 0.142826
[9000] valid_set's l1: 0.142451
[10000] valid_set's l1: 0.142113
-0.1421 = Validation score (-mean_absolute_error)
22.77s = Training runtime
1.8s = Validation runtime
Fitting model: RandomForestMSE ...
-0.1474 = Validation score (-mean_absolute_error)
82.35s = Training runtime
0.26s = Validation runtime
Fitting model: CatBoost ...
-0.1488 = Validation score (-mean_absolute_error)
198.42s = Training runtime
0.02s = Validation runtime
Fitting model: ExtraTreesMSE ...
-0.1369 = Validation score (-mean_absolute_error)
17.56s = Training runtime
0.23s = Validation runtime
Fitting model: NeuralNetFastAI ...
-0.192 = Validation score (-mean_absolute_error)
43.3s = Training runtime
0.03s = Validation runtime
Fitting model: XGBoost ...
-0.1462 = Validation score (-mean_absolute_error)
31.06s = Training runtime
2.02s = Validation runtime
Fitting model: NeuralNetTorch ...
-0.1612 = Validation score (-mean_absolute_error)
303.16s = Training runtime
0.02s = Validation runtime
Fitting model: LightGBMLarge ...
[1000] valid_set's l1: 0.149135
[2000] valid_set's l1: 0.142361
[3000] valid_set's l1: 0.140528
[4000] valid_set's l1: 0.139625
[5000] valid_set's l1: 0.139307
[6000] valid_set's l1: 0.139291
[7000] valid_set's l1: 0.139238
[8000] valid_set's l1: 0.139199
[9000] valid_set's l1: 0.139244
[10000] valid_set's l1: 0.139256
-0.1392 = Validation score (-mean_absolute_error)
50.14s = Training runtime
3.15s = Validation runtime
Fitting model: WeightedEnsemble_L2 ...
-0.1339 = Validation score (-mean_absolute_error)
0.35s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 803.86s ... Best model: "WeightedEnsemble_L2"
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("AutogluonModels/ag-20230727_044102/")
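This run used fit() defaults. For reference, AutoGluon's fit() also accepts a preset and a time budget, which typically trade training time for accuracy; the preset and limit below are assumptions, not what was run here:
# Hypothetical higher-effort configuration; not the run shown above.
predictor = TabularPredictor(label='풍속 (m/s)', problem_type='regression', eval_metric='mae')
predictor.fit(train_3, presets='best_quality', time_limit=3600)  # enables bagging/stacked ensembling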
In [ ]:
results = predictor.fit_summary()
*** Summary of fit() ***
Estimated performance of each model:
model score_val pred_time_val fit_time pred_time_val_marginal fit_time_marginal stack_level can_infer fit_order
0 WeightedEnsemble_L2 -0.133898 10.692065 448.231814 0.000633 0.354889 2 True 12
1 ExtraTreesMSE -0.136855 0.228944 17.556133 0.228944 17.556133 1 True 7
2 LightGBMLarge -0.139178 3.153922 50.139374 3.153922 50.139374 1 True 11
3 LightGBM -0.142110 1.797391 22.767510 1.797391 22.767510 1 True 4
4 LightGBMXT -0.144250 3.472089 23.193653 3.472089 23.193653 1 True 3
5 XGBoost -0.146177 2.021983 31.058183 2.021983 31.058183 1 True 9
6 RandomForestMSE -0.147434 0.260848 82.345464 0.260848 82.345464 1 True 5
7 CatBoost -0.148752 0.018180 198.415354 0.018180 198.415354 1 True 6
8 NeuralNetTorch -0.161236 0.017104 303.162073 0.017104 303.162073 1 True 10
9 KNeighborsDist -0.188244 0.065346 0.114947 0.065346 0.114947 1 True 2
10 NeuralNetFastAI -0.192004 0.029302 43.302716 0.029302 43.302716 1 True 8
11 KNeighborsUnif -0.197770 0.068729 1.651721 0.068729 1.651721 1 True 1
Number of models trained: 12
Types of models trained:
{'CatBoostModel', 'NNFastAiTabularModel', 'WeightedEnsembleModel', 'XTModel', 'KNNModel', 'LGBModel', 'TabularNeuralNetTorchModel', 'RFModel', 'XGBoostModel'}
Bagging used: False
Multi-layer stack-ensembling used: False
Feature Metadata (Processed):
(raw dtype, special dtypes):
('float', []) : 11 | ['섭씨 온도(°\u2063C)', '절대 온도(K)', '이슬점 온도(°C)', '상대 습도 (%)', '대기압(mbar)', ...]
('int', []) : 3 | ['월', '일', '측정 시간대']
Plot summary of models saved to file: AutogluonModels/ag-20230727_044102/SummaryOfModels.html
*** End of fit() summary ***
In [ ]:
model_to_use = predictor.get_model_best()
model_to_use
Out[ ]:
'WeightedEnsemble_L2'
In [ ]:
model_to_use = predictor.get_model_best()
pred_3 = predictor.predict(X_test, model=model_to_use)
In [ ]:
pred_3
Out[ ]:
0 1.104405
1 0.666788
2 0.987389
3 0.856508
4 0.946609
...
15673 1.534070
15674 1.046430
15675 0.792045
15676 1.166205
15677 0.713134
Name: 풍속 (m/s), Length: 15678, dtype: float32
In [ ]:
submission = pd.read_csv('/content/sample_submission.csv')
submission['풍속 (m/s)'] = pred_3
submission.head()
Out[ ]:
| | ID | 풍속 (m/s) |
|---|---|---|
| 0 | TEST_00000 | 1.104405 |
| 1 | TEST_00001 | 0.666788 |
| 2 | TEST_00002 | 0.987389 |
| 3 | TEST_00003 | 0.856508 |
| 4 | TEST_00004 | 0.946609 |
In [ ]:
submission.to_csv('submission_3.csv', index=False)
Rank unchanged, but the score improved slightly.
score: 1.078565696
Modeling 4
- Rerun AutoGluon on the data from before preprocessing
In [ ]:
train_4 = train.copy()
In [ ]:
train_4 = train_4.drop('ID', axis=1)
In [ ]:
train_4 = TabularDataset(train_4)
X_test = TabularDataset(X_test)
In [ ]:
predictor = TabularPredictor(label='풍속 (m/s)',
                             problem_type='regression',
                             eval_metric='mae').fit(train_4)
No path specified. Models will be saved in: "AutogluonModels/ag-20230727_052242/"
Beginning AutoGluon training ...
AutoGluon will save models to "AutogluonModels/ag-20230727_052242/"
AutoGluon Version: 0.8.2
Python Version: 3.10.6
Operating System: Linux
Platform Machine: x86_64
Platform Version: #1 SMP Fri Jun 9 10:57:30 UTC 2023
Disk Space Avail: 80.75 GB / 115.66 GB (69.8%)
Train Data Rows: 36581
Train Data Columns: 14
Label Column: 풍속 (m/s)
Preprocessing data ...
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 11856.0 MB
Train Data (Original) Memory Usage: 4.1 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Stage 5 Generators:
Fitting DropDuplicatesFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('float', []) : 11 | ['섭씨 온도(°\u2063C)', '절대 온도(K)', '이슬점 온도(°C)', '상대 습도 (%)', '대기압(mbar)', ...]
('int', []) : 3 | ['월', '일', '측정 시간대']
Types of features in processed data (raw dtype, special dtypes):
('float', []) : 11 | ['섭씨 온도(°\u2063C)', '절대 온도(K)', '이슬점 온도(°C)', '상대 습도 (%)', '대기압(mbar)', ...]
('int', []) : 3 | ['월', '일', '측정 시간대']
0.1s = Fit runtime
14 features in original data used to generate 14 features in processed data.
Train Data (Processed) Memory Usage: 4.1 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.13s ...
AutoGluon will gauge predictive performance using evaluation metric: 'mean_absolute_error'
This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value.
To change this, specify the eval_metric parameter of Predictor()
Automatically generating train/validation split with holdout_frac=0.06834148875099096, Train Rows: 34081, Val Rows: 2500
User-specified model hyperparameters to be fit:
{
'NN_TORCH': {},
'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}}, {}, 'GBMLarge'],
'CAT': {},
'XGB': {},
'FASTAI': {},
'RF': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'XT': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'KNN': [{'weights': 'uniform', 'ag_args': {'name_suffix': 'Unif'}}, {'weights': 'distance', 'ag_args': {'name_suffix': 'Dist'}}],
}
Fitting 11 L1 models ...
Fitting model: KNeighborsUnif ...
-0.1978 = Validation score (-mean_absolute_error)
1.42s = Training runtime
0.05s = Validation runtime
Fitting model: KNeighborsDist ...
-0.1882 = Validation score (-mean_absolute_error)
0.07s = Training runtime
0.05s = Validation runtime
Fitting model: LightGBMXT ...
[1000] valid_set's l1: 0.185561
[2000] valid_set's l1: 0.169956
[3000] valid_set's l1: 0.161669
[4000] valid_set's l1: 0.156781
[5000] valid_set's l1: 0.153135
[6000] valid_set's l1: 0.150523
[7000] valid_set's l1: 0.148405
[8000] valid_set's l1: 0.146782
[9000] valid_set's l1: 0.145277
[10000] valid_set's l1: 0.144252
-0.1443 = Validation score (-mean_absolute_error)
26.92s = Training runtime
3.29s = Validation runtime
Fitting model: LightGBM ...
[1000] valid_set's l1: 0.165738
[2000] valid_set's l1: 0.153494
[3000] valid_set's l1: 0.148446
[4000] valid_set's l1: 0.146113
[5000] valid_set's l1: 0.144918
[6000] valid_set's l1: 0.144045
[7000] valid_set's l1: 0.143417
[8000] valid_set's l1: 0.142826
[9000] valid_set's l1: 0.142451
[10000] valid_set's l1: 0.142113
-0.1421 = Validation score (-mean_absolute_error)
21.92s = Training runtime
1.61s = Validation runtime
Fitting model: RandomForestMSE ...
-0.1474 = Validation score (-mean_absolute_error)
81.2s = Training runtime
0.26s = Validation runtime
Fitting model: CatBoost ...
-0.1488 = Validation score (-mean_absolute_error)
185.05s = Training runtime
0.02s = Validation runtime
Fitting model: ExtraTreesMSE ...
-0.1369 = Validation score (-mean_absolute_error)
17.4s = Training runtime
0.23s = Validation runtime
Fitting model: NeuralNetFastAI ...
-0.192 = Validation score (-mean_absolute_error)
38.44s = Training runtime
0.04s = Validation runtime
Fitting model: XGBoost ...
-0.1462 = Validation score (-mean_absolute_error)
31.98s = Training runtime
0.8s = Validation runtime
Fitting model: NeuralNetTorch ...
-0.1612 = Validation score (-mean_absolute_error)
315.6s = Training runtime
0.02s = Validation runtime
Fitting model: LightGBMLarge ...
[1000] valid_set's l1: 0.149135
[2000] valid_set's l1: 0.142361
[3000] valid_set's l1: 0.140528
[4000] valid_set's l1: 0.139625
[5000] valid_set's l1: 0.139307
[6000] valid_set's l1: 0.139291
[7000] valid_set's l1: 0.139238
[8000] valid_set's l1: 0.139199
[9000] valid_set's l1: 0.139244
[10000] valid_set's l1: 0.139256
-0.1392 = Validation score (-mean_absolute_error)
52.04s = Training runtime
3.15s = Validation runtime
Fitting model: WeightedEnsemble_L2 ...
-0.1339 = Validation score (-mean_absolute_error)
0.49s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 799.9s ... Best model: "WeightedEnsemble_L2"
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("AutogluonModels/ag-20230727_052242/")
In [ ]:
model_to_use = predictor.get_model_best()
model_to_use
Out[ ]:
'WeightedEnsemble_L2'
In [ ]:
model_to_use = predictor.get_model_best()
pred_4 = predictor.predict(X_test, model=model_to_use)
In [ ]:
pred_4
Out[ ]:
0 1.104405
1 0.666788
2 0.987389
3 0.856508
4 0.946609
...
15673 1.534070
15674 1.046430
15675 0.792045
15676 1.166205
15677 0.713134
Name: 풍속 (m/s), Length: 15678, dtype: float32
In [ ]:
submission = pd.read_csv('/content/sample_submission.csv')
submission['풍속 (m/s)'] = pred_4
submission.head()
Out[ ]:
| | ID | 풍속 (m/s) |
|---|---|---|
| 0 | TEST_00000 | 1.104405 |
| 1 | TEST_00001 | 0.666788 |
| 2 | TEST_00002 | 0.987389 |
| 3 | TEST_00003 | 0.856508 |
| 4 | TEST_00004 | 0.946609 |
In [ ]:
submission.to_csv('submission_4.csv', index=False)
The score is identical to the run where AutoGluon was applied after preprocessing.
The only takeaway seems to be that if you're going to run AutoGluon, you don't really need to preprocess first.
score: 1.078565696