
Using decision trees

We can import the DecisionTreeClassifier class and create a decision tree using scikit-learn:

from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier(random_state=14)

We used 14 for our random_state again and will do so for most of the book. Using the same random seed allows experiments to be replicated. In your own experiments, however, you should vary the random state to ensure that the algorithm's performance is not tied to one specific value.
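A quick way to check this is to repeat the evaluation with several seeds. The following is a minimal sketch of that idea; it uses a synthetic dataset from scikit-learn's make_classification as a stand-in for real data, together with the cross_val_score helper introduced in Chapter 2:

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
import numpy as np

# Synthetic stand-in data; substitute your own X and y arrays
X, y = make_classification(n_samples=500, random_state=14)

# Evaluate with several seeds to confirm the score is stable
for seed in [1, 7, 14, 42]:
    clf = DecisionTreeClassifier(random_state=seed)
    scores = cross_val_score(clf, X, y, scoring='accuracy')
    print("random_state={0}: {1:.1f}%".format(seed, np.mean(scores) * 100))

If the scores vary wildly from one seed to the next, the result says more about chance than about the model.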

We now need to extract the dataset from our pandas DataFrame in order to use it with our scikit-learn classifier. We do this by selecting the columns we wish to use and accessing the values attribute of that view of the DataFrame. The following code creates a dataset using the last win values for both the home team and the visitor team:

X_previouswins = dataset[["HomeLastWin", "VisitorLastWin"]].values
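As a quick sanity check (our addition, not part of the original workflow), X_previouswins should now be a NumPy array with one row per game and two columns:

# One row per game; columns are HomeLastWin and VisitorLastWin
print(X_previouswins.shape)
print(X_previouswins[:5])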

Decision trees are estimators, as introduced in Chapter 2, Classifying using scikit-learn Estimators, and therefore have fit and predict methods. We can also use the cross_val_score function to get the average score (as we did previously):

from sklearn.model_selection import cross_val_score
import numpy as np

# y_true holds the true home-win labels created earlier in the chapter
scores = cross_val_score(clf, X_previouswins, y_true,
                         scoring='accuracy')
print("Accuracy: {0:.1f}%".format(np.mean(scores) * 100))
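Because the classifier exposes fit and predict, we could also score it on a single held-out split. The following sketch uses train_test_split purely for illustration; it is not part of the chapter's cross-validation workflow:

from sklearn.model_selection import train_test_split

# Hold out 25 percent of the games (the default test size)
X_train, X_test, y_train, y_test = train_test_split(
    X_previouswins, y_true, random_state=14)
clf.fit(X_train, y_train)
y_predicted = clf.predict(X_test)
print("Held-out accuracy: {0:.1f}%".format(
    np.mean(y_predicted == y_test) * 100))

A single split gives a noisier estimate than cross-validation, which is why cross_val_score is preferred for the experiments here.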

This scores 59.4 percent: we are better than choosing randomly! However, we aren't beating our other baseline of always choosing the home team; in fact, the two results are almost identical. We should be able to do better. Feature engineering is one of the most difficult tasks in data mining, and choosing good features is key to getting good outcomes, more so than choosing the right algorithm!
