Autoregressive Moving Average (ARMA) Definition
Autoregressive Moving Average (ARMA) Explained
ARMA is a forecasting model in which the methods of autoregression (AR) and moving average (MA) analysis are both applied to well-behaved time series data. ARMA assumes that the time series is stationary: when it fluctuates, it does so uniformly around a constant mean over time.
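To make the idea concrete, a stationary ARMA(1,1) process can be simulated with NumPy alone. This is a minimal illustrative sketch; the function name `simulate_arma` and the parameter values are chosen for illustration, not taken from any library.

```python
import numpy as np

def simulate_arma(phi, theta, n, sigma=1.0, seed=0):
    """Simulate an ARMA(1,1) process:
    x[t] = phi * x[t-1] + e[t] + theta * e[t-1],
    where e[t] is white noise with standard deviation sigma."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]
    return x

series = simulate_arma(phi=0.6, theta=0.3, n=500)
# A stationary series fluctuates uniformly around a constant mean (here 0),
# so the sample mean of a long realization stays close to zero.
print(round(series.mean(), 2))
```

With |phi| < 1 the AR part is stable, which is what makes the simulated series stationary.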
A Little More on the ARMA Model
The ARMA approach, developed by Box and Jenkins (1970) as a time series analysis method, does not consider the role of explanatory variables drawn from economic or financial theory; instead it describes the time series by extrapolation, based on the changing law of the series itself. A prerequisite for building such a time series model is that the series is stationary. ARMA is essential in studying a time series. It is commonly used in market research for long-term tracking data, for example in retail research, to analyze sales volume with seasonal variation characteristics. The model is among the high-resolution spectral analysis methods of the model-parameter class: it is used to study the rational spectrum of stationary stochastic processes and is suited to a large class of practical problems. ARMA offers better, more accurate spectral estimation and resolution than the AR or MA model alone, but its parameter estimation is more cumbersome.
Overview of the ARMA model
If both the input sequence {u(n)} and the output sequence {x(n)} of the model can be measured, the model parameters can be estimated with the least squares method, and the estimate is good because the estimation problem is linear. In many spectral estimation settings, however, only the output {x(n)} can be observed; an accurate estimate of the ARMA model parameters is then difficult to obtain because the estimation problem becomes nonlinear. Optimal estimation methods for ARMA model parameters have been introduced in theory, but they suffer from large computational complexity and cannot guarantee convergence. Suboptimal methods, which estimate the AR and MA parameters separately rather than simultaneously as in optimal parameter estimation, are therefore used in engineering practice to reduce the amount of calculation significantly.
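The linear least-squares case can be sketched for the AR part, where the lagged outputs themselves serve as regressors. This is an illustrative example, not a library routine; the true coefficients 0.5 and -0.3 are arbitrary values chosen so the estimate can be checked against them.

```python
import numpy as np

# Simulate an AR(2) process: x[t] = 0.5*x[t-1] - 0.3*x[t-2] + e[t]
rng = np.random.default_rng(1)
n = 2000
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]

# Least-squares estimation: regress x[t] on its own two lags.
# Because the model is linear in the parameters, this is an
# ordinary linear least-squares problem.
X = np.column_stack([x[1:-1], x[:-2]])  # lag-1 and lag-2 columns
y = x[2:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coeffs)  # close to [0.5, -0.3]
```

When an MA part is present, the unobserved noise terms e[t] enter the regressors, which is exactly why the estimation problem becomes nonlinear.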
Fundamentals of the ARMA Model
The prediction index forms a data sequence over time that is known as a random sequence. The dependence among its values reflects the continuity of the original data in time: on one side there is the influence of external factors, and on the other there is the series' own law of change. The influencing factors, entering through regression analysis, are assumed to be x1, x2, ..., xk.
AR Model
The AR model is commonly used in modern spectrum estimation. The following is the procedure for using this model.
 Selecting the AR model and setting its output equal to the signal under study when the input is an impulse function or white noise; the output should at least be a good approximation of the signal.
 Finding the model's parameters from the known autocorrelation function or from the data.
 Using the derived model parameters to estimate the power spectrum of the signal.
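The last step of the procedure above can be sketched as follows, assuming the AR parameters are already known. `ar_power_spectrum` is a hypothetical helper that evaluates the standard AR spectrum formula P(f) = sigma^2 / |1 - sum_k a_k e^{-i 2 pi f k}|^2.

```python
import numpy as np

def ar_power_spectrum(a, sigma2, freqs):
    """AR power spectrum P(f) = sigma2 / |A(f)|^2, where
    A(f) = 1 - sum_k a_k exp(-i 2 pi f k) and f is in cycles/sample."""
    k = np.arange(1, len(a) + 1)
    A = 1 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ np.asarray(a)
    return sigma2 / np.abs(A) ** 2

freqs = np.linspace(0, 0.5, 256)
spectrum = ar_power_spectrum(a=[0.9], sigma2=1.0, freqs=freqs)
# An AR(1) model with a = 0.9 concentrates power at low frequencies.
print(spectrum[0] > spectrum[-1])  # True
```

Because the AR spectrum has the model parameters in the denominator, it can show sharp peaks, which is the source of the high resolution mentioned earlier.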
MA Model
It is a commonly used model in modern spectrum estimation and is also one of the methods of model parametric spectrum analysis. The procedure for estimating the MA model's signal spectrum is as follows.
 Selecting the MA model and setting its output equal to the signal under study when the input is an impulse function or white noise; the output should at least be a good approximation of the signal.
 Finding the model's parameters from the known autocorrelation function.
 Estimating the signal's power spectrum from the derived model parameters.
In the estimation of the ARMA parameter spectrum, the AR parameters are first estimated, and the MA parameters are then estimated based on these AR parameters; the spectral estimate of the ARMA model is obtained from both. The parameter estimation of the MA model is therefore often carried out as one stage of the ARMA parameter spectrum estimation process. The method is used in fault diagnosis of mechanical parts such as gears, since it can resolve separate sinusoidal signal frequencies.
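A minimal sketch of the combined spectral estimate, assuming the AR parameters `a` and MA parameters `b` have already been obtained in the two stages described above; `arma_power_spectrum` is an illustrative helper, not a library function.

```python
import numpy as np

def arma_power_spectrum(a, b, sigma2, freqs):
    """ARMA power spectrum P(f) = sigma2 * |B(f)|^2 / |A(f)|^2, where
    A(f) = 1 - sum_k a_k exp(-i 2 pi f k)  (AR part, estimated first)
    B(f) = 1 + sum_k b_k exp(-i 2 pi f k)  (MA part, estimated second)."""
    ka = np.arange(1, len(a) + 1)
    kb = np.arange(1, len(b) + 1)
    A = 1 - np.exp(-2j * np.pi * np.outer(freqs, ka)) @ np.asarray(a)
    B = 1 + np.exp(-2j * np.pi * np.outer(freqs, kb)) @ np.asarray(b)
    return sigma2 * np.abs(B) ** 2 / np.abs(A) ** 2

freqs = np.linspace(0, 0.5, 128)
spec = arma_power_spectrum(a=[0.6], b=[0.3], sigma2=1.0, freqs=freqs)
print(spec.shape)
```

The rational form |B|^2/|A|^2 is the "rational spectrum" of the stationary process mentioned earlier: the AR denominator models spectral peaks and the MA numerator models spectral valleys.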
References for Autoregressive Moving Average
 https://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model
 http://www.businessdictionary.com/definition/ARMA-model.html
 https://www.quantstart.com/articles/Autoregressive-Moving-Average-ARMA-p-q-Models-for-Time-Series-Analysis-Part-1
Academic Research for Autoregressive Moving Average
 Diagnostic checking ARMA time series models using squared-residual autocorrelations, McLeod, A. I., & Li, W. K. (1983). Journal of Time Series Analysis, 4(4), 269-273. This paper reports the results of a simulation experiment that confirms the small-sample validity of the proposed tests. The authors show that the normalized squared-residual autocorrelations are asymptotically unit multivariate normal.
 Long-range forecasting of IBM product revenues using a seasonal fractionally differenced ARMA model, Ray, B. K. (1993). International Journal of Forecasting, 9(2), 255-269. This study utilizes a series of monthly IBM product revenues to show the usefulness of seasonal fractionally differenced ARMA models for business forecasting.
 The estimation and application of long memory time series models, Geweke, J., & Porter-Hudak, S. (1983). Journal of Time Series Analysis, 4(4), 221-238. This paper presents a generalization of the definitions of fractional Gaussian noise and integrated series. It also illustrates that these two concepts are equivalent.
 Maximum likelihood fitting of ARMA models to time series with missing observations, Jones, R. H. (1980). Technometrics, 22(3), 389-395. This article reviews the method used to calculate the exact likelihood function of a stationary ARMA time series based on a Markovian representation by Akaike and using Kalman recursive estimation.
 Analysis of an adaptive time-series autoregressive moving-average (ARMA) model for short-term load forecasting, Chen, J. F., Wang, W. M., & Huang, C. M. (1995). Electric Power Systems Research, 34(3), 187-196. This paper presents the development of an adaptive ARMA model for short-term load forecasting of a power system.
 An introduction to long-memory time series models and fractional differencing, Granger, C. W., & Joyeux, R. (1980). Journal of Time Series Analysis, 1(1), 15-29. This article introduces the idea of fractional differencing in terms of the infinite filter corresponding to the expansion of (1-B)^d.
 Quasi-maximum likelihood estimation and inference in dynamic models with time-varying covariances, Bollerslev, T., & Wooldridge, J. M. (1992). Econometric Reviews, 11(2), 143-172. This paper studies the properties of the quasi-maximum likelihood estimator and related test statistics in dynamic models that jointly parameterize conditional means and covariances, when the average log-likelihood is maximized and the assumption of normality is violated.
 The estimation of the order of an ARMA process, Hannan, E. J. (1980). The Annals of Statistics, 1071-1081. This paper establishes strong consistency of estimates of the maximum lags of an autoregressive moving average process. It also proves a theorem on weak consistency and evaluates the probability of overestimation of a maximum lag in cases where consistency does not hold.
 Improved methods of combining forecasts, Granger, C. W., & Ramanathan, R. (1984). Journal of Forecasting, 3(2), 197-204. In this paper, three alternative approaches to obtaining linear combinations are considered. It also shows that the best method is to add a constant term and not constrain the weights to sum to unity.