
Hypothesis

Here, x denotes the input variables, also called input features, and y denotes the output or target variable that we are trying to predict. A pair (x, y) is called a training example, and the dataset used for learning, a list of m training examples {(x(i), y(i)); i = 1, ..., m}, is called a training set. We will also use X to denote the space of input values and Y to denote the space of output values. Given a training set, our goal is to learn a function h: X → Y so that h(x) is a good predictor of the corresponding value of y. The function h is called a hypothesis.

When the target variable to be predicted is continuous, we call the learning problem a regression problem. When y can take only a small number of discrete values, we call it a classification problem.

Let's say we choose to approximate y as a linear function of x.

The hypothesis function is as follows:

hθ(x) = θ0 + θ1x1 + θ2x2 + ... + θnxn

In this last hypothesis function, the θi's are parameters, also known as weights, which parameterize the space of linear functions mapping from X to Y. To simplify the notation, we also introduce the convention of letting x0 = 1 (this is the intercept term), such that:

h(x) = Σi θixi = θTx, where the sum runs from i = 0 to n

On the RHS, we view θ and x both as vectors, and n is the number of input variables.
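This vectorized form is straightforward to express in code. The following is a minimal sketch using NumPy, with a hypothetical parameter vector and input; the names theta, x, and h are illustrative, not part of any particular library:

```python
import numpy as np

# Hypothetical parameters and input for n = 2 input variables.
# With the convention x0 = 1, theta[0] acts as the intercept term.
theta = np.array([0.5, 2.0, -1.0])   # [theta_0, theta_1, theta_2]
x = np.array([1.0, 3.0, 4.0])        # [x_0 = 1, x_1, x_2]

def h(theta, x):
    """Linear hypothesis: h(x) = theta^T x, the inner product of theta and x."""
    return theta @ x

print(h(theta, x))  # 0.5 + 2.0*3.0 - 1.0*4.0 = 2.5
```

Prepending the constant 1 as x0 lets the intercept be handled by the same inner product as the other weights, which is why the convention is introduced.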

Before we proceed any further, note that we are now transitioning from mathematical fundamentals to learning algorithms. Optimizing the cost function to learn θ lays the foundation for understanding machine learning algorithms.

Given a training set, how do we learn the parameters θ? One plausible approach is to make h(x) close to y, at least for the training examples we have. To formalize this, we define a function that measures, for each choice of θ, how close the h(x(i)) values are to the corresponding y(i) values. We call this a cost function.
