Statistics
Maple contains a wealth of functionality for doing statistics, much of it in the Statistics package. Maple 17 has a new algorithm for fitting data in an underdetermined system (one with more parameters than observations) for use in predictive models, as well as a new robust measure of dispersion that is more suitable for asymmetric distributions than the median absolute deviation.
Fitting Data for Use in a Predictive Model
Robust Measures of Dispersion
References
Fitting Data for Use in a Predictive Model
Maple 17 has a new algorithm for fitting data in an underdetermined system, one with more parameters than observations, for use in predictive models.
Consider a survey of large-cap Energy sector stocks in the S&P 500, where you know a lot of information about a few listings. Assume in this case that most of the data, when adjusted against general market trends, ends up being useless for predicting future markets, but some properties do end up being strong market indicators. You do not, however, know which variables to keep and which not to.
Let's set up an example using random data to model this situation.
restart;
with(Statistics):
NumStocks:=41:
NumProperties:=100:
HiddenIndicators:=LinearAlgebra:-RandomMatrix(NumStocks, 10, datatype=float[8]):
Data:=LinearAlgebra:-RandomMatrix(NumStocks, NumProperties, datatype=float[8]):
Data[.., 1..5]:=HiddenIndicators[.., 1..5]:
Profit:=Vector(NumStocks, i -> add(HiddenIndicators[i, j]*j, j=1..10)):
At this point we have generated the known data, and have profit numbers for that data. The following chart takes one sample parameter, property 10, and plots it against the profit. It is not obvious what the correlation is.
BarChart([Data[.., 10], Profit])
We want to build a model that fits all 100 properties to the observed profit. A standard least-squares fit is used first, for comparison.
StandardLS:=LinearAlgebra:-LeastSquares(Data, Profit):
Because there are more variables than observations, the linear system can be solved exactly: we'll always get a perfect fit, with 100% correlation to the existing data.
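Indeed, a random 41 × 100 float matrix almost surely has full row rank, so the system Data . x = Profit has exact solutions. One quick way to convince yourself is to compute the numerical rank:
LinearAlgebra:-Rank(Data);   # expected: 41, the number of observations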
ProfitEstimatesUsingStandardLS:=Data.StandardLS:
Correlation(Profit, ProfitEstimatesUsingStandardLS)
1.00000000000000
We don't really want a model that will perfectly fit today's data. We want a model that will detect which parameters are likely to actually correlate well with the profit observed and can be used in predicting new observations. We want to avoid overfitting.
plots[display](plot(x, x=-150..150), ScatterPlot(Profit, ProfitEstimatesUsingStandardLS))
Let's set up a new model using predictive least-squares.
SignificantProperties, PredictLS:=Statistics:-PredictiveLeastSquares(Data, Profit):
SignificantProperties
[1, 2, 3, 4, 7, 8]
Note that the PredictiveLeastSquares command, new in Maple 17, returns two things: a list indicating the columns of the data that it decided were most relevant for use in prediction, as well as the vector of least-squares coefficients for those columns (as with regular least-squares).
ProfitEstimatesUsingPredictLS:=Data[.., SignificantProperties].PredictLS:
Correlation(Profit, ProfitEstimatesUsingPredictLS)
0.947961956927974
The correlation with the current profit data is not perfect (as expected) but still very good.
plots[display](plot(x, x=-150..150), ScatterPlot(Profit, ProfitEstimatesUsingPredictLS))
The following graph shows the standard least-squares (red), predictive least-squares (green), and actual numbers on the same plot.
perm, sortedProfit:=sort(Profit, output=['permutation', 'sorted']):
pts:=[seq(i, i=1..NumStocks)]:
plots[display](ScatterPlot(pts, sortedProfit), ScatterPlot(pts, ProfitEstimatesUsingPredictLS[perm], color=green, symbolsize=15), ScatterPlot(pts, ProfitEstimatesUsingStandardLS[perm], color=red, symbolsize=15))
These commands also illustrate another new feature of Maple 17: the output option for sort. It can be used to obtain the reordering applied to the input (in this case, Profit) in order to sort it. This reordering (here, perm) can then be applied to other lists or Vectors, as we do with the profit estimates above; a minimal example follows. There are more examples on the help page for sort.
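As a minimal sketch of this option on a small, made-up list:
L:=[3, 1, 2]:
p, s:=sort(L, output=['permutation', 'sorted']):   # p = [2, 3, 1], s = [1, 2, 3]
L[p];                                              # returns [1, 2, 3], i.e. s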
Now we will generate further data in the same way: new hidden indicators drive the profit, and again only some of them actually appear among the properties that we know about.
NewHiddenIndicators:=LinearAlgebra:-RandomMatrix(NumStocks, 10, datatype=float[8]):
NewData:=LinearAlgebra:-RandomMatrix(NumStocks, NumProperties, datatype=float[8]):
NewData[.., 1..5]:=NewHiddenIndicators[.., 1..5]:
NewProfit:=Vector(NumStocks, i -> add(NewHiddenIndicators[i, j]*j, j=1..10)):
Let's first use the standard least-squares model against the new data.
NewProfitEstimatesUsingStandardLS:=NewData.StandardLS:
Correlation(NewProfit, NewProfitEstimatesUsingStandardLS)
0.710550589496143
It is clear that the model suffers from overfitting against spurious parameters in the original data set.
NewProfitEstimatesUsingPredictLS:=NewData[.., SignificantProperties].PredictLS:
Correlation(NewProfit, NewProfitEstimatesUsingPredictLS)
0.948260572403935
The predictive model achieved a much better correlation when using new data.
perm2, sortedProfit2:=sort(NewProfit, output=['permutation', 'sorted']):
plots[display](ScatterPlot(pts, sortedProfit2), ScatterPlot(pts, NewProfitEstimatesUsingPredictLS[perm2], color=green, symbolsize=15), ScatterPlot(pts, NewProfitEstimatesUsingStandardLS[perm2], color=red, symbolsize=15))
It is clear that the predictive least-squares data (green) more closely matches the actual points in black, whereas the standard least-squares data (red), which generally correlates, is much more scattered.
Robust Measures of Dispersion
A measure of dispersion is a statistic of a data set that describes the variability or spread of that data set. Two well-known examples are the standard deviation and the interquartile range. Maple 17 introduces a new measure of dispersion called Sn, originally proposed by Rousseeuw and Croux [1].
Let us investigate how measures of dispersion behave when noise is added to a data set. Specifically, we will have an original data set X of, say, n data points, and a perturbed data set Y in which a fraction r of the data points (that is, r·n points) are changed dramatically. We investigate at what value of r the statistic stops conveying meaningful information.
X:=Sample(Normal(0, 1), 1000):
StandardDeviation(X)
1.04421364916374
Y:=copy(X):
Y[1]:=10^100:
StandardDeviation(Y)
3.16227766016838 × 10^98
For the standard deviation, we see that changing only one data point can change the result massively: with n = 1000 points, a single outlier of size 10^100 contributes about 10^100/√1000 ≈ 3.16 × 10^98 to the standard deviation, which is exactly what we observe. In other words, there is no positive fraction r of the data points that we can change while keeping the standard deviation bounded. We say that the breakdown point of the standard deviation is 0.
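To see this scaling directly, here is a small check (assuming X as sampled above); a single outlier of magnitude m drives the standard deviation to roughly m/√n:
Z:=copy(X):
for m in [10.0^10, 10.0^15, 10.0^20] do
  Z[1]:=m:
  print(StandardDeviation(Z));   # roughly m/sqrt(1000) in each case
end do: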
For the interquartile range, the situation is different. Changing a single data point doesn't change the interquartile range of Y very much; in fact, we can change up to a quarter of the data points while staying within an order of magnitude of the interquartile range of X. As soon as we have changed 250 out of the 1000 data points, though, the interquartile range also goes through the roof.
InterquartileRange(X)
1.38339270130420
Y[1..249]:=10^100:
InterquartileRange(Y)
4.05253073684634
Y[250]:=10^100:
InterquartileRange(Y)
5.83333333333371 × 10^99
This suggests that the breakdown point of the interquartile range is 1/4: changing strictly fewer than 1/4 of the points cannot make the interquartile range unbounded. This is indeed correct: as long as fewer than a quarter of the points are changed, both quartiles are still determined by (or interpolated between) original data points, which are bounded. We say that the interquartile range is more robust than the standard deviation.
The breakdown point for any statistic can never be more than 1/2: if we change over half of the data points in the set, then there's no way to decide what the "correct" data is, and what the "changed" data is. So are there dispersion statistics that reach this maximal breakdown point?
Yes, there are. A relatively well-known one is the median absolute deviation from the median, available in Maple as MedianDeviation. As the name says, it is obtained by computing the absolute difference between every data point and the median of the data set, and taking the median of these values.
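As a quick worked illustration on a small, made-up list: the median of [1, 2, 4, 7, 50] is 4, the absolute deviations from it are [3, 2, 0, 3, 46], and their median is 3.
A:=[1, 2, 4, 7, 50]:
abs~(A -~ Median(A));   # the absolute deviations [3, 2, 0, 3, 46]
MedianDeviation(A);     # their median, 3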
MedianDeviation(X)
0.683845406516676
Y:=copy(X):
Y[1..499]:=10^100:
MedianDeviation(Y)
6.52664152292415
Y[500]:=10^100:
The median absolute deviation from the median is a very useful robust estimator, but it also has some disadvantages, explained in the paper [1] by Rousseeuw and Croux. One of their objections is that it doesn't deal with asymmetric distributions very well; another is that, while it is very robust against extreme changes in some points, it needs relatively many data points to "converge" to the proper value for a distribution in the absence of disturbance. In the statistics literature, this is phrased as saying that the median absolute deviation from the median is not very efficient. These authors propose two alternative statistics that also have a breakdown point of 1/2 but higher efficiency, called Sn and Qn. Maple 17 implements Sn in the new command RousseeuwCrouxSn.
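For intuition, here is a naive O(n^2) sketch of the idea behind Sn. It uses plain medians instead of the high and low medians of [1] and omits the scaling constant, so it only approximates what RousseeuwCrouxSn computes, but the structure is the same: a median, over all points, of each point's median distance to the other points.
NaiveSn:=proc(V)
  local n, meds, j;
  n:=numelems(V);
  # for each data point, the median of its distances to all data points
  meds:=Vector(n, i -> Median(Vector([seq(abs(V[i] - V[j]), j=1..n)])));
  # then the median of these medians
  return Median(meds);
end proc:
Rousseeuw and Croux show that the refined medians and the scaling constant give Sn its 1/2 breakdown point and its good efficiency.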
RousseeuwCrouxSn(X)
0.868120336664191
Y:=copy(X):
Y[1..499]:=10^100:
RousseeuwCrouxSn(Y)
7.17500133596595
Y[500]:=10^100:
RousseeuwCrouxSn(Y)
1.00000000000000 × 10^100
With 499 points changed, Sn stays bounded; changing one more point finally makes it break down, consistent with a breakdown point of 1/2.
We will show how all of these statistics deviate from their true value for beta-distributed data samples at sample sizes from 10 to 10000 and with fractions between 0 and 1/2 of the data replaced by the value 5. In particular, given the sample size and the fraction r, we replace the highest 100r percent of the data by 5, then divide the value obtained for the changed sample by the true value for the distribution, thus obtaining a number that should be 1 for an ideal statistic. We then repeat this 100 times and take the average squared difference from 1. This is the number shown in the plot below for each of the four measures of dispersion discussed above.
functions:=[StandardDeviation, InterquartileRange, MedianDeviation, RousseeuwCrouxSn]:
X:=Sample(BetaDistribution(1, 2), 10^6):
true_values:=map(f -> f(X), functions)
true_values := [0.235596218498922, 0.364943301288678, 0.176447401282199, 0.212276454518238]
sample_sizes:=[10, 30, 100, 300, 1000, 3000, 10000]:
nss:=numelems(sample_sizes):
results:=Array(1..4, 1..nss, 0..10, 1..100):
for k to 100 do
  X:=Sample(BetaDistribution(1, 2), max(sample_sizes));
  for i to nss do
    Y:=X[1..sample_sizes[i]];
    sort[inplace](Y, `>`):
    for j from 0 to 10 do
      Y[1..ceil(j*sample_sizes[i]/20)]:=5;
      for f to numelems(functions) do
        results[f, i, j, k]:=functions[f](Y)/true_values[f];
      end do;
    end do;
  end do;
end do:
rr:=Array(1..4, 1..nss, 0..10):
for i to nss do
  for j from 0 to 10 do
    for f to 4 do
      rr[f, i, j]:=Moment(results[f, i, j, ..], 2, origin=1);
    end do:
  end do:
end do:
plots:-display(
  plots:-surfdata~([seq(convert(rr[i, .., ..], Matrix), i=1..4)], 1..nss, 0..0.5, color =~ [red, green, blue, yellow]),
  transparency=0.2,
  axis[1]=[tickmarks=[seq(i=sample_sizes[i], i=1..nss)]],
  axis[3]=[mode=log],
  view=[DEFAULT, DEFAULT, min(rr)..10],
  orientation=[116, -68, 177],
  labels=[`Sample sizes`, r, Variance],
  labeldirections=[horizontal, horizontal, vertical]);
The colors are red for the standard deviation, green for the interquartile range, blue for the median absolute deviation from the median, and yellow for Rousseeuw and Croux's Sn. Lower numbers are shown higher in the graph and are better. We see that when there is no contamination (r = 0), the standard deviation deviates least from its true value. However, as soon as there is any contamination, it is immediately too inaccurate to be useful for any purpose. For r < 0.25, the interquartile range (green) does reasonably well, but greater values of r make it, too, unusable. The median absolute deviation from the median (blue) and Sn (yellow) are pretty close in most cases, with yellow coming out on top slightly more often than blue. We see that Sn is a good choice for a robust measure of dispersion.
References
[1] Rousseeuw, Peter J., and Croux, Christophe. "Alternatives to the Median Absolute Deviation." Journal of the American Statistical Association, 88(424), 1993, pp. 1273-1283.
See Also
InterquartileRange
MedianDeviation
PredictiveLeastSquares
RousseeuwCrouxSn
sort
StandardDeviation