Here's what you have to do:
(1) Use the scikit-learn wine dataset (sklearn.datasets.load_wine()) and plot the data (a sketch of this step follows below).
(2) Plot the results of logistic regression and random forest models on that data (a second sketch, after the copied examples, covers this).
Below are examples of what the code looks like. Feel free to copy the code and adapt it to the wine dataset for this task.
THIS IS FOR A PYTHON OR COLAB NOTEBOOK.
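For step (1), here's a minimal sketch of loading and plotting the wine data. Reading "plot the data" as a scatter of two of the 13 features, colored by class, is my assumption; any feature pair (or a full pairplot) would do.

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_wine

    wine = load_wine()
    X, y = wine.data, wine.target   # 178 samples, 13 features, 3 classes

    # Scatter two features, colored by class; the feature pair is arbitrary.
    plt.scatter(X[:, 0], X[:, 1], c=y)
    plt.xlabel(wine.feature_names[0])   # 'alcohol'
    plt.ylabel(wine.feature_names[1])   # 'malic_acid'
    plt.title('Wine dataset: first two features by class')
    plt.show()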
Logistic Regression With Iris Data

    from sklearn.linear_model import LogisticRegression

    # assumes X_train, X_test, y_train, y_test were already split
    # from the iris data in an earlier cell (not shown in the screenshot)
    clf = LogisticRegression(random_state=0).fit(X_train, y_train)
    clf.predict(X_test)
    clf.predict_proba(X_test)
    clf.score(X_test, y_test)

Output: 0.96

    import sklearn.metrics as metrics

    predicted = clf.predict(X_test)
    print("Classification report:")
    print(metrics.classification_report(y_test, predicted))

Output:

    Classification report:
                  precision    recall  f1-score   support

               0       1.00      1.00      1.00        16
               1       1.00      0.83      0.91        18
               2       0.79      1.00      0.88        11

       micro avg       0.93      0.93      0.93        45
       macro avg       0.93      0.94      0.93        45
    weighted avg       0.95      0.93      0.93        45

Random Forest With Digits Data

    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # setup from the standard sklearn digits example; the screenshot
    # used `data` and `digits` without showing where they came from
    digits = load_digits()
    n_samples = len(digits.images)
    data = digits.images.reshape((n_samples, -1))

    classifier = RandomForestClassifier()
    X_train, X_test, y_train, y_test = train_test_split(
        data, digits.target, test_size=0.5, shuffle=False)
    classifier = classifier.fit(X_train, y_train)

    predicted = classifier.predict(X_test)
    print(classifier.score(X_test, y_test))

Output: 0.8765294771968855

    import matplotlib.pyplot as plt

    # show the first four test images with the model's predictions
    _, axes = plt.subplots(1, 4)
    images_and_predictions = list(zip(digits.images[n_samples // 2:], predicted))
    for ax, (image, prediction) in zip(axes, images_and_predictions[:4]):
        ax.set_axis_off()
        ax.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
        ax.set_title('Prediction: %i' % prediction)

[Figure output: four digit images titled "Prediction: 8", "Prediction: 6", "Prediction: 4", "Prediction: 9"]
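And for step (2), one way to adapt the snippets above to the wine data. Interpreting "plot the data of the models" as plotting each model's test-set predictions (with its accuracy in the title) is my reading; if something else is expected, such as confusion matrices, the fitting code stays the same and only the plotting part changes. Raising max_iter is also my addition, since logistic regression often fails to converge on the unscaled wine features with the default limit.

    import matplotlib.pyplot as plt
    from sklearn.datasets import load_wine
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    wine = load_wine()
    X_train, X_test, y_train, y_test = train_test_split(
        wine.data, wine.target, test_size=0.5, random_state=0)

    models = {
        'Logistic Regression': LogisticRegression(random_state=0, max_iter=5000),
        'Random Forest': RandomForestClassifier(random_state=0),
    }

    fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
    for ax, (name, model) in zip(axes, models.items()):
        model.fit(X_train, y_train)
        predicted = model.predict(X_test)
        # plot the test points in the first two feature dimensions,
        # colored by the model's predicted class
        ax.scatter(X_test[:, 0], X_test[:, 1], c=predicted)
        ax.set_title('%s (accuracy: %.2f)' % (name, model.score(X_test, y_test)))
        ax.set_xlabel(wine.feature_names[0])
    axes[0].set_ylabel(wine.feature_names[1])
    plt.show()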