Parameter estimates
In this section, we are going to discuss some of the algorithms used for parameter estimation.
Maximum likelihood estimation
Maximum likelihood estimation (MLE) is a method for estimating model parameters from a given dataset by choosing the parameter values that maximize the likelihood of the observed data.
Now let us try to find the parameter estimates of the probability density function of a normal distribution.
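For a sample $y_1, \dots, y_n$ from a normal distribution (standard notation, not taken from the original text), the log-likelihood to be maximized in $\mu$ and $\sigma$ is:

\[
\ell(\mu, \sigma) = \sum_{i=1}^{n} \log f(y_i; \mu, \sigma)
= -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \mu)^2
\]

In practice, as in the code below, we minimize the negative of this quantity.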
Let us first generate a series of random variables, which can be done by executing the following code:
> set.seed(100)
> NO_values <- 100
> Y <- rnorm(NO_values, mean = 5, sd = 1)
> mean(Y)
This gives 5.002913.
> sd(Y)
This gives 1.02071.
Now let us write a function for the negative log-likelihood (dnorm returns the density of each observation; taking logs, summing, and negating gives the quantity that mle minimizes):
> LogL <- function(mu, sigma) {
+   A = dnorm(Y, mu, sigma)
+   -sum(log(A))
+ }
Now let us apply the mle function to estimate the mean and standard deviation:
> library(stats4)
> mle(LogL, start = list(mu = 2, sigma = 2))
Here, mu and sigma have been given initial values.
This gives the output as follows:
Figure 2.13: Output for MLE estimation
NaNs are produced because the optimizer attempts negative values for the standard deviation.
This can be controlled by passing relevant options, as shown here; using the L-BFGS-B method with a lower bound of zero on sigma suppresses the warning messages displayed in Figure 2.13:
> mle(LogL, start = list(mu = 2, sigma = 2), method = "L-BFGS-B",
+     lower = c(-Inf, 0),
+     upper = c(Inf, Inf))
This, upon execution, gives the best possible fit, as shown here:
Figure 2.14: Revised output for MLE estimation
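As a quick sanity check (a sketch added here, not part of the original text, reusing the Y generated above), the normal MLE has a closed form, so the optimizer's estimates should closely match the values computed directly from the sample:

> mu_hat <- mean(Y)
> sigma_hat <- sqrt(mean((Y - mean(Y))^2))  # MLE divisor is n, not n - 1
> c(mu = mu_hat, sigma = sigma_hat)

Note that sigma_hat differs slightly from sd(Y), which uses the unbiased divisor n - 1. Optimizing over log(sigma) instead of sigma is another common way to avoid the lower bound entirely.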
Linear model
In the linear regression model, we try to predict the dependent/response variable in terms of the independent/predictor variables by fitting the best possible line, known as the regression line, through the given points. The coefficients of the regression line are estimated using statistical software. The intercept represents the mean value of the dependent variable when the predictor variable is zero, and the response variable changes by the estimated coefficient for each unit change in the predictor variable.
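In standard notation (not from the original text), the simple linear model is:

\[
Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2)
\]

where $\beta_0$ is the intercept and $\beta_1$ is the slope estimated by least squares.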
Now let us estimate the parameters of a linear regression model where the dependent variable is Adj.Close and the independent variable is Volume of Sampledata. We can fit the linear model as follows:
> Y <- Sampledata$Adj.Close
> X <- Sampledata$Volume
> fit <- lm(Y ~ X)
> summary(fit)
Upon executing the preceding code, the output is generated as given here:
Figure 2.15: Output for linear model estimation
The summary display shows the parameter estimates of the linear regression model. Similarly, we can estimate parameters for other regression models, such as multiple regression or other forms of regression.
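As a brief follow-up (a hedged sketch reusing the fit object from above; the volume values passed to predict are hypothetical), the estimates can also be extracted and used directly:

> coef(fit)     # intercept and slope estimates
> confint(fit)  # 95% confidence intervals for the coefficients
> predict(fit, newdata = data.frame(X = c(1e6, 2e6)))  # hypothetical volumes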