[Last Update: Oct 27, 2017]

## The Time Value of Money

### Time Value of Money Concepts and Applications

• compound interest or interest on interest
• future value
• present value
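As a minimal sketch (amounts and rates are hypothetical), compounding and discounting are inverse operations:

```python
def future_value(pv, rate, n):
    """Compound a present value forward n periods at a per-period rate."""
    return pv * (1 + rate) ** n

def present_value(fv, rate, n):
    """Discount a future value back n periods (the inverse operation)."""
    return fv / (1 + rate) ** n

fv = future_value(1000, 0.05, 10)  # hypothetical: $1,000 at 5% for 10 years
pv = present_value(fv, 0.05, 10)   # discounting recovers the original $1,000
```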

### Using a Financial Calculator

• learn to solve the TVM (time value of money) problems

### Time Lines

• discounting/compounding
• cash flows occur at the end of the period depicted on the time line
• required rate of return / discount rate / opportunity cost
• nominal risk-free rate = real risk-free rate + expected inflation rate
• required interest rate on a security = nominal risk-free rate + default risk premium + liquidity premium + maturity risk premium

### Annuities

• ordinary annuity

## Basic Statistics

### Measure of Central Tendency

#### LO 16.2: Calculate the mean, standard deviation, and variance of a discrete random variable.

• population mean / sample mean
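The probability-weighted definitions translate directly into code; a quick sketch with hypothetical outcomes and probabilities:

```python
import math

outcomes = [1.0, 2.0, 3.0, 4.0]   # hypothetical payoffs
probs = [0.1, 0.2, 0.3, 0.4]      # hypothetical probabilities (sum to 1)

# Mean: probability-weighted average of the outcomes.
mean = sum(p * x for p, x in zip(probs, outcomes))

# Variance: probability-weighted average squared deviation from the mean.
variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))
std_dev = math.sqrt(variance)
```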

### Covariance and Correlation

#### LO 16.5: Calculate and interpret the covariance and correlation between two random variables.

correlation measures the strength of the linear relationship between two random variables
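A minimal sketch of the sample covariance and correlation (hypothetical paired data; n − 1 denominator for the sample statistics):

```python
import math

x = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical returns on asset X
y = [2.0, 4.0, 5.0, 4.0, 5.0]   # hypothetical returns on asset Y

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n

# Sample covariance (n - 1 in the denominator).
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / (n - 1)

std_x = math.sqrt(sum((a - mean_x) ** 2 for a in x) / (n - 1))
std_y = math.sqrt(sum((b - mean_y) ** 2 for b in y) / (n - 1))

corr = cov / (std_x * std_y)   # always lies between -1 and +1
```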

### Moments and Central Moments

• skewness (standardized third central moment): evaluate whether the distribution is symmetric
• kurtosis (standardized fourth central moment): refers to the degree of peakedness or clustering in the data distribution

### Skewness and Kurtosis

#### LO 16.7: Interpret the skewness and kurtosis of a statistical distribution, and interpret the concepts of coskewness and cokurtosis.

• skewness

• positively/right skewed distributions: mode < median < mean (for a unimodal distribution); positive skew means more outliers in the positive (right) tail

• negatively/left skewed distributions: mode > median > mean (for a unimodal distribution)

• kurtosis

• leptokurtic (more peaked than a normal distribution)
• greater probability of an observed value being either close to the mean or far from the mean
• platykurtic (flatter than a normal distribution)
• mesokurtic (same kurtosis as a normal distribution)
• excess kurtosis: kurtosis - 3 (relative to a normal distribution)
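The standardized moments above can be computed directly; a sketch using the population convention and a hypothetical symmetric sample:

```python
import math

def standardized_moment(data, k):
    """k-th central moment divided by sigma^k (population convention)."""
    n = len(data)
    mean = sum(data) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    return sum((x - mean) ** k for x in data) / n / sigma ** k

sample = [1.0, 2.0, 2.0, 3.0, 3.0, 3.0, 4.0, 4.0, 5.0]
skew = standardized_moment(sample, 3)  # 0 for this symmetric sample
kurt = standardized_moment(sample, 4)
excess_kurtosis = kurt - 3             # negative here: platykurtic
```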

### Coskewness and Cokurtosis

• coskewness: third cross central moment
• cokurtosis: fourth cross central moment
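A sketch of a cross central moment estimator (hypothetical data; standardization by the marginal standard deviations is omitted):

```python
def cross_central_moment(x, y, p, q):
    """Estimate E[(X - mu_x)^p * (Y - mu_y)^q] from paired samples.
    p + q = 3 gives a coskewness term; p + q = 4 gives a cokurtosis term."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) ** p * (b - my) ** q for a, b in zip(x, y)) / n

x = [1.0, 2.0, 3.0]   # hypothetical sample
y = [1.0, 2.0, 3.0]
coskew_term = cross_central_moment(x, y, 2, 1)  # a third cross central moment
cokurt_term = cross_central_moment(x, y, 2, 2)  # a fourth cross central moment
```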

• desirable properties of an estimator:
• unbiased
• efficient
• consistent
• linear

## Hypothesis Testing and Confidence Intervals

### Hypothesis Testing

#### LO 19.3: Construct an appropriate null and alternative hypothesis, and calculate an appropriate test statistic.

### Type I and Type II Errors

• Type I: rejection of the null hypothesis when it is actually true
• Type II: the failure to reject null hypothesis when it is actually false

### The Power of a Test

• power: 1 - P(Type II error), the probability of correctly rejecting a false null hypothesis

### The p-Value

• p-value: the probability, assuming the null hypothesis is true, of obtaining a test statistic at least as extreme as the one observed

### Chebyshev’s Inequality

• the percentage of observations that lie within k standard deviations of the mean is at least $1 - 1/k^{2}$ (for any k > 1, regardless of the distribution)
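Because Chebyshev's bound holds for any distribution, it can be checked on a deliberately skewed (exponential) sample:

```python
import random

random.seed(42)
data = [random.expovariate(1.0) for _ in range(10_000)]  # skewed, non-normal

n = len(data)
mean = sum(data) / n
std = (sum((x - mean) ** 2 for x in data) / n) ** 0.5

# The fraction within k standard deviations must be at least 1 - 1/k^2.
for k in (1.5, 2, 3):
    within = sum(abs(x - mean) <= k * std for x in data) / n
    assert within >= 1 - 1 / k ** 2
```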

## Linear Regression with One Regressor

### Properties of OLS Estimators

#### LO 20.7: Summarize the benefits of using OLS estimators.

• widely used in practice
• easily understood across multiple fields
• unbiased, consistent, and (under special conditions) efficient

### OLS Regression Results

#### LO 20.9: Interpret the explained sum of squares, the total sum of squares, the residual sum of squares, the standard error of the regression, and the regression $R^{2}$.

• the coefficient of determination (interpreted as the percentage of variation in the dependent variable explained by the independent variable)
• total sum of squares = explained sum of squares + sum of squared residuals (TSS = ESS + SSR)
• $\sum(Y_{i} - \bar{Y})^{2} = \sum(\hat{Y}_{i} - \bar{Y})^{2} + \sum(Y_{i} - \hat{Y}_{i})^{2}$
• $R^{2} = \frac{ESS}{TSS}$
• difference between “correlation coefficient” and “coefficient of determination” (P137)
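The decomposition TSS = ESS + SSR and the resulting $R^{2}$ can be verified with a hand-rolled simple regression on hypothetical data:

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical regressor
y = [2.1, 3.9, 6.2, 8.1, 9.8]   # hypothetical dependent variable

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# OLS slope and intercept for the single-regressor case.
beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
alpha = my - beta * mx
y_hat = [alpha + beta * a for a in x]

tss = sum((b - my) ** 2 for b in y)                 # total sum of squares
ess = sum((h - my) ** 2 for h in y_hat)             # explained sum of squares
ssr = sum((b - h) ** 2 for b, h in zip(y, y_hat))   # sum of squared residuals
r2 = ess / tss

assert abs(tss - (ess + ssr)) < 1e-9  # TSS = ESS + SSR
```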

## Regression with a Single Regressor: Hypothesis Tests and Confidence Intervals

### Regression Coefficient Hypothesis Testing

#### LO 21.2: Interpret the p-value

• p-value: the smallest level of significance for which the null hypothesis can be rejected

### What is Heteroskedasticity?

#### LO 21.4: Evaluate the implications of homoskedasticity and heteroskedasticity.

• homoskedastic (variance of residuals is constant across all observations in the sample)
• heteroskedasticity (variance of residuals differs across observations)
• unconditional heteroskedasticity (usually causes no major problems with regression)
• conditional heteroskedasticity (does create significant problems for statistical inference)

### Detecting Heteroskedasticity

## Linear Regression with Multiple Regressors

### Adjusted $R^{2}$

• adjusted $R^{2} = 1 - \frac{n-1}{n-k-1} (1 - R^{2})$ is less than or equal to $R^{2}$
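A direct sketch of the adjusted $R^{2}$ formula (values for n, k, and $R^{2}$ are hypothetical):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 for n observations and k regressors; the
    (n-1)/(n-k-1) factor penalizes each added regressor."""
    return 1 - (n - 1) / (n - k - 1) * (1 - r2)

adj = adjusted_r2(0.80, 50, 4)   # hypothetical fit
assert adj <= 0.80               # never exceeds the unadjusted R^2
```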

### Detecting Multicollinearity

• individual coefficients appear insignificant (high p-values) even though the regression $R^{2}$ is high

## Hypothesis Tests and Confidence Intervals in Multiple Regression

### The F-Statistic

• F-statistic: always a one-tailed test, calculated as $\frac{ESS/k}{SSR/(n-k-1)}$
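The F-statistic formula sketched directly, with hypothetical sums of squares:

```python
def f_statistic(ess, ssr, n, k):
    """F = (ESS/k) / (SSR/(n-k-1)): the joint test that all k slope
    coefficients are zero; always evaluated as a one-tailed test."""
    return (ess / k) / (ssr / (n - k - 1))

f = f_statistic(ess=90.0, ssr=10.0, n=25, k=3)  # hypothetical sums of squares
```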

## Modeling and Forecasting Trend

### The $s^{2}$ measure

• $s^{2}$ measure (unbiased estimate of the MSE): $s^{2} = \frac{\sum_{t=1}^{T} e_{t}^{2}}{T-k} = \frac{T}{T-k}\frac{\sum_{t=1}^{T} e_{t}^{2}}{T}$
• adjusted $R^{2}$ using the $s^{2}$ estimate

### Akaike and Schwarz Criterion

• Akaike information criterion: $e^{\frac{2k}{T}} \frac{\sum_{t=1}^{T} e_{t}^{2}}{T}$
• Schwarz information criterion: $T^{\frac{k}{T}} \frac{\sum_{t=1}^{T} e_{t}^{2}}{T}$
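Both criteria as given above, sketched directly (T observations, k estimated parameters):

```python
import math

def aic(residuals, T, k):
    """AIC as defined above: exp(2k/T) times the average squared error."""
    mse = sum(e * e for e in residuals) / T
    return math.exp(2 * k / T) * mse

def sic(residuals, T, k):
    """SIC: T^(k/T) times the average squared error; a heavier
    degrees-of-freedom penalty than the AIC once T >= 8."""
    mse = sum(e * e for e in residuals) / T
    return T ** (k / T) * mse
```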

### Evaluating Consistency

#### LO 24.4: Explain the necessary conditions for a model selection criterion to demonstrate consistency

• the SIC is the consistent selection criterion with the greatest penalty factor for degrees of freedom

## Modeling and Forecasting Seasonality

### Sources of Seasonality

#### LO 25.1: Describe the sources of seasonality and how to deal with it in time series analysis.

• sources of seasonality
• approaches
• using a seasonally adjusted time series
• regression analysis with seasonal dummy variables
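The dummy-variable approach can be sketched by building the regressors themselves (quarter labels are hypothetical):

```python
# Hypothetical quarterly data: the quarter label for each observation.
quarters = [1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4]

# One dummy column per quarter except the omitted base quarter (Q1);
# including all four alongside an intercept would cause perfect
# multicollinearity (the dummy variable trap).
dummies = {s: [1 if q == s else 0 for q in quarters] for s in (2, 3, 4)}
```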

## Characterizing Cycles

### Estimating the Mean and Autocorrelation Functions

#### LO 26.11: Describe sample partial autocorrelation.

• sample autocorrelation: estimates the correlation between observations a given number of lags apart; used to assess whether white noise characterizes a series of data
• sample partial autocorrelation: estimates the correlation at a given lag after controlling for the effects of all shorter lags
• q-statistics: measure the degree to which autocorrelations vary from zero and whether white noise is present in a dataset
• Box-Pierce q-statistic: reflects the absolute magnitudes of the correlations
• Ljung-Box q-statistic: similar to the Box-Pierce statistic, with a correction that improves its small-sample performance
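A sketch of both q-statistics built on the sample autocorrelation (the chi-square comparison used to reject white noise is omitted):

```python
def sample_autocorr(x, lag):
    """Sample autocorrelation at a given lag."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t - lag] - mean) for t in range(lag, n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def box_pierce_q(x, m):
    """Q_BP = T * (sum of squared autocorrelations over lags 1..m)."""
    T = len(x)
    return T * sum(sample_autocorr(x, k) ** 2 for k in range(1, m + 1))

def ljung_box_q(x, m):
    """Q_LB weights each squared autocorrelation by (T + 2)/(T - k),
    which improves the statistic's small-sample behavior."""
    T = len(x)
    return T * (T + 2) * sum(
        sample_autocorr(x, k) ** 2 / (T - k) for k in range(1, m + 1)
    )
```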

## Correlations and Copulas

### Copulas

#### LO 29.6: Define copula and describe the key properties of copulas and copula correlation

• copula: creates a joint probability distribution between two or more variables while maintaining their individual marginal distributions.
• copula correlation

## Simulation Methods

### Reducing Monte Carlo Sampling Error

#### LO 30.2: Describe ways to reduce Monte Carlo sampling error.

• increase replications
• antithetic variates
• control variates
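A sketch of the antithetic variates idea, estimating E[exp(Z)] for standard normal Z (the integrand and sample size are illustrative):

```python
import math
import random
import statistics

random.seed(7)

def plain_mc(n):
    """Crude Monte Carlo estimate of E[exp(Z)], Z ~ N(0, 1); true value e^0.5."""
    return statistics.mean(math.exp(random.gauss(0, 1)) for _ in range(n))

def antithetic_mc(n):
    """Each draw z is paired with -z; averaging f(z) and f(-z) reduces
    sampling error because the two terms are negatively correlated."""
    total = 0.0
    for _ in range(n // 2):
        z = random.gauss(0, 1)
        total += (math.exp(z) + math.exp(-z)) / 2
    return total / (n // 2)

true_value = math.exp(0.5)
crude = plain_mc(10_000)
est = antithetic_mc(10_000)
```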

### Resulting Sets of Random Numbers

#### LO 30.5: Describe the benefits of reusing sets of random number draws across Monte Carlo experiments and how to reuse them.

• Dickey-Fuller (DF) test: tests for a unit root to determine whether a time series is covariance stationary
• Different Experiments: