| Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
113 | 1 | null | null | 11 | 484 | I have been looking into theoretical frameworks for method selection (note: not model selection) and have found very little systematic, mathematically-motivated work. By 'method selection', I mean a framework for distinguishing the appropriate (or better, optimal) method with respect to a problem, or problem type.
What... | What are some good frameworks for method selection? | CC BY-SA 2.5 | null | 2010-07-19T20:54:23.200 | 2010-10-08T23:57:02.170 | 2010-07-21T15:44:07.450 | 39 | 39 | [
"machine-learning",
"methodology",
"mathematical-statistics"
] |
114 | 1 | null | null | 35 | 1667 | What statistical research blogs would you recommend, and why?
| What statistical blogs would you recommend? | CC BY-SA 3.0 | null | 2010-07-19T21:00:53.077 | 2016-10-13T15:18:29.730 | 2016-10-13T15:18:29.730 | 28666 | 8 | [
"references"
] |
115 | 2 | null | 103 | 1 | null | [We Love Datavis](http://datavis.tumblr.com/), a data visualization tumblog.
| null | CC BY-SA 3.0 | null | 2010-07-19T21:01:35.757 | 2012-10-24T15:02:27.243 | 2012-10-24T15:02:27.243 | 615 | 127 | null |
116 | 2 | null | 114 | 12 | null | Cosma Shalizi's [blog](http://www.cscs.umich.edu/~crshalizi/weblog/) often talks about statistics, and is always interesting.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:04:16.080 | 2010-07-19T21:04:16.080 | null | null | 72 | null |
117 | 2 | null | 114 | 25 | null | [http://www.r-bloggers.com/](http://www.r-bloggers.com/) is an aggregated blog from lots of blogs that talk about statistics using R, and the [#rstats](http://search.twitter.com/search?q=%23rstats) hashtag on twitter is also helpful. I write quite a bit about [statistics and R in genetics research](http://gettinggeneti... | null | CC BY-SA 2.5 | null | 2010-07-19T21:04:24.283 | 2010-07-20T15:13:37.983 | 2010-07-20T15:13:37.983 | 36 | 36 | null |
118 | 1 | 151 | null | 548 | 307338 | In the definition of standard deviation, why do we have to square the difference from the mean to get the mean (E) and take the square root back at the end? Can't we just simply take the absolute value of the difference instead and get the expected value (mean) of those, and wouldn't that also show the variation of the... | Why square the difference instead of taking the absolute value in standard deviation? | CC BY-SA 3.0 | null | 2010-07-19T21:04:39.057 | 2022-11-23T10:16:14.803 | 2021-01-19T20:58:21.960 | 919 | 83 | [
"standard-deviation",
"definition",
"absolute-value",
"faq"
] |
119 | 2 | null | 118 | 9 | null | There are many reasons; probably the main one is that it works well as a parameter of the normal distribution.
| null | CC BY-SA 3.0 | null | 2010-07-19T21:11:44.797 | 2013-04-27T14:09:42.487 | 2013-04-27T14:09:42.487 | null | null | null |
120 | 2 | null | 118 | 98 | null | One way you can think of this is that standard deviation is similar to a "distance from the mean".
Compare this to distances in euclidean space - this gives you the true distance, where what you suggested (which, btw, is the [absolute deviation](http://en.wikipedia.org/wiki/Average_absolute_deviation)) is more like a... | null | CC BY-SA 2.5 | null | 2010-07-19T21:14:07.983 | 2010-07-19T21:14:07.983 | null | null | 41 | null |
121 | 2 | null | 118 | 155 | null | The squared difference has nicer mathematical properties; it's continuously differentiable (nice when you want to minimize it), it's a sufficient statistic for the Gaussian distribution, and it's (a version of) the L2 norm which comes in handy for proving convergence and so on.
The mean absolute deviation (the absolute... | null | CC BY-SA 2.5 | null | 2010-07-19T21:14:25.407 | 2010-07-19T21:14:25.407 | null | null | 61 | null |
123 | 2 | null | 118 | 21 | null | There are a couple of reasons for squaring the difference from the mean.
- Variance is defined as the 2nd moment of the deviation (the R.V. here is $(x-\mu)$), and moments are simply the expectations of powers of the random variable, hence the square.
- Having a square as opposed to the absolute value function gives a ni... | null | CC BY-SA 3.0 | null | 2010-07-19T21:15:20.917 | 2017-04-20T00:53:18.180 | 2017-04-20T00:53:18.180 | 5176 | 130 | null |
124 | 1 | null | null | 33 | 2018 | I'm a programmer without statistical background, and I'm currently looking at different classification methods for a large number of different documents that I want to classify into pre-defined categories. I've been reading about kNN, SVM and NN. However, I have some trouble getting started. What resources do you recom... | Statistical classification of text | CC BY-SA 2.5 | null | 2010-07-19T21:17:30.543 | 2018-12-30T19:39:56.940 | 2010-07-21T22:17:00.927 | null | 131 | [
"classification",
"information-retrieval",
"text-mining"
] |
125 | 1 | null | null | 245 | 148505 | Which is the best introductory textbook for Bayesian statistics?
One book per answer, please.
| What is the best introductory Bayesian statistics textbook? | CC BY-SA 2.5 | null | 2010-07-19T21:18:12.713 | 2021-10-19T15:45:27.030 | 2012-01-22T20:18:28.350 | null | 5 | [
"bayesian",
"references"
] |
126 | 2 | null | 125 | 65 | null | My favorite is ["Bayesian Data Analysis"](http://www.stat.columbia.edu/~gelman/book/) by Gelman, et al. (The pdf version is legally free since April 2020!)
| null | CC BY-SA 4.0 | null | 2010-07-19T21:19:43.570 | 2020-04-06T16:52:41.577 | 2020-04-06T16:52:41.577 | 53690 | 5 | null |
127 | 2 | null | 125 | 31 | null | Another vote for Gelman et al., but a close second for me -- being of the learn-by-doing persuasion -- is Jim Albert's ["Bayesian Computation with R"](http://www-math.bgsu.edu/~albert/bcwr/).
| null | CC BY-SA 4.0 | null | 2010-07-19T21:23:20.593 | 2019-04-02T12:06:42.873 | 2019-04-02T12:06:42.873 | 53690 | 61 | null |
128 | 1 | 191 | null | 14 | 33131 | In Plain English, how does one interpret a Bland-Altman plot?
What are the advantages of using a Bland-Altman plot over other methods of comparing two different measurement methods?
| How does one interpret a Bland-Altman plot? | CC BY-SA 2.5 | null | 2010-07-19T21:23:57.973 | 2020-04-02T17:50:20.670 | 2016-07-13T08:05:10.397 | 1352 | 132 | [
"data-visualization",
"bland-altman-plot"
] |
129 | 2 | null | 125 | 8 | null | I quite like [Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference](http://rads.stackoverflow.com/amzn/click/1584885874) by Gamerman and Lopes.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:24:58.567 | 2010-10-05T13:56:15.550 | 2010-10-05T13:56:15.550 | 8 | 8 | null |
130 | 1 | 131 | null | 41 | 13563 | I had a plan of learning R in the near future. Reading [another question](https://stats.stackexchange.com/questions/3/what-are-some-valuable-statistical-analysis-open-source-projects) I found out about Clojure. Now I don't know what to do.
I think a big advantage of R for me is that some people in Economics use it, inc... | Clojure versus R: advantages and disadvantages for data analysis | CC BY-SA 2.5 | null | 2010-07-19T21:26:27.023 | 2023-01-06T21:49:20.187 | 2017-04-13T12:44:27.570 | -1 | 90 | [
"r"
] |
131 | 2 | null | 130 | 27 | null | Let me start by saying that I love both languages: you can't go wrong with either, and they are certainly better than something like C++ or Java for doing data analysis.
For basic data analysis I would suggest R (especially with plyr). IMO, R is a little easier to learn than Clojure, although this isn't completely obv... | null | CC BY-SA 4.0 | null | 2010-07-19T21:28:41.907 | 2023-01-06T21:49:20.187 | 2023-01-06T21:49:20.187 | 30155 | 5 | null |
132 | 2 | null | 125 | 11 | null | Coming from non-statistical background I found [Introduction to Applied Bayesian Statistics and Estimation for Social Scientists](http://rads.stackoverflow.com/amzn/click/038771264X) quite informative and easy to follow.
| null | CC BY-SA 3.0 | null | 2010-07-19T21:29:37.040 | 2017-02-10T17:57:46.607 | 2017-02-10T17:57:46.607 | 12080 | 22 | null |
133 | 2 | null | 4 | -1 | null | I don't know how to use SAS/R/Orange, but it sounds like the kind of test you need is a [chi-square test](http://en.wikipedia.org/wiki/Chi-square_test).
| null | CC BY-SA 2.5 | null | 2010-07-19T21:31:53.813 | 2010-07-19T21:31:53.813 | null | null | 139 | null |
134 | 1 | 3449 | null | 23 | 22773 | On smaller window sizes, `n log n` sorting might work. Are there any better algorithms to achieve this?
| Algorithms to compute the running median? | CC BY-SA 2.5 | null | 2010-07-19T21:32:38.523 | 2021-08-19T04:28:21.460 | 2010-08-03T12:14:50.543 | 8 | 138 | [
"algorithms",
"median"
] |
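Since the question above asks about improving on re-sorting each window, here is a minimal Python sketch of one common approach (not necessarily the accepted answer, whose body is truncated in this dump): keep the window in sorted order with `bisect`, paying O(log k) to locate plus O(k) to shift per step instead of O(k log k) to re-sort. Asymptotically better structures (two heaps with lazy deletion, indexable skip lists) exist; the function name `running_median` is my own.

```python
from bisect import insort, bisect_left

def running_median(seq, k):
    """Medians of every length-k window, keeping the window sorted
    instead of re-sorting it from scratch at each step."""
    window = sorted(seq[:k])
    medians = []
    for i in range(k, len(seq) + 1):
        m = len(window)
        medians.append(window[m // 2] if m % 2
                       else (window[m // 2 - 1] + window[m // 2]) / 2)
        if i == len(seq):
            break
        window.pop(bisect_left(window, seq[i - k]))  # drop outgoing value
        insort(window, seq[i])                       # insert incoming value
    return medians
```

For example, `running_median([1, 3, 2, 5, 4], 3)` yields the medians of the windows `[1,3,2]`, `[3,2,5]`, `[2,5,4]`.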
135 | 2 | null | 4 | 18 | null | I believe that this calls for a [two-sample Kolmogorov–Smirnov test](http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/ks2samp.htm), or the like. The two-sample Kolmogorov–Smirnov test is based on comparing differences in the [empirical distribution functions](http://en.wikipedia.org/wiki/Empirical_dis... | null | CC BY-SA 2.5 | null | 2010-07-19T21:36:12.850 | 2010-07-19T21:52:08.617 | 2010-07-19T21:52:08.617 | 39 | 39 | null |
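The two-sample Kolmogorov–Smirnov statistic mentioned in the answer above is just the largest vertical gap between the two empirical distribution functions. A small pure-Python sketch (function name mine; no p-value computation, which requires the KS distribution):

```python
from bisect import bisect_right

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs, checked at every datum."""
    a, b = sorted(a), sorted(b)
    return max(abs(bisect_right(a, x) / len(a) - bisect_right(b, x) / len(b))
               for x in a + b)
```

Identical samples give 0, fully separated samples give 1.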
137 | 2 | null | 124 | 20 | null | I recommend these books - they are highly rated on Amazon too:
- "Text Mining" by Weiss
- "Text Mining Application Programming" by Konchady
For software, I recommend RapidMiner (with the text plugin), free and open-source.
This is my "text mining process":
- collect the documents (usually a web crawl)
[sample if too la... | null | CC BY-SA 3.0 | null | 2010-07-19T21:38:09.370 | 2017-07-21T07:15:29.190 | 2017-07-21T07:15:29.190 | 166832 | 74 | null |
138 | 1 | 1213 | null | 78 | 48150 | I'm interested in learning [R](http://en.wikipedia.org/wiki/R_%28programming_language%29) on the cheap. What's the best free resource/book/tutorial for learning R?
| Free resources for learning R | CC BY-SA 3.0 | null | 2010-07-19T21:38:10.290 | 2016-02-08T17:30:40.050 | 2016-02-08T16:52:47.047 | 28666 | 142 | [
"r",
"references"
] |
139 | 2 | null | 138 | 24 | null | If I had to choose one thing, make sure that you read ["The R Inferno"](http://www.burns-stat.com/pages/Tutor/R_inferno.pdf).
There are many good resources on [the R homepage](http://www.r-project.org), but in particular, read ["An Introduction to R"](http://cran.r-project.org/doc/manuals/R-intro.pdf) and ["The R Langu... | null | CC BY-SA 2.5 | null | 2010-07-19T21:39:17.220 | 2010-07-19T21:39:17.220 | null | null | 5 | null |
140 | 2 | null | 138 | 8 | null | The official guides are pretty nice; check out [http://cran.r-project.org/manuals.html](http://cran.r-project.org/manuals.html) . There is also a lot of contributed documentation there.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:39:35.690 | 2010-07-19T21:39:35.690 | null | null | null | null |
141 | 2 | null | 103 | 2 | null | Light-hearted: [Indexed](http://thisisindexed.com/)
Also, see older visualizations from the same creator at the original [Indexed Blog](http://indexed.blogspot.com/).
| null | CC BY-SA 3.0 | null | 2010-07-19T21:40:02.540 | 2012-10-24T14:58:17.090 | 2012-10-24T14:58:17.090 | 615 | 142 | null |
142 | 2 | null | 138 | 6 | null | After you learn the basics, I find the following sites very useful:
- R-bloggers.
- Subscribing to the Stack overflow R tag.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:42:57.987 | 2010-07-19T21:42:57.987 | 2017-05-23T12:39:26.523 | -1 | 8 | null |
143 | 2 | null | 124 | 5 | null | A neural network may be too slow for a large number of documents (and this approach is now pretty much obsolete).
You may also check Random Forest among classifiers; it is quite fast, scales nicely, and does not need complex tuning.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:48:28.567 | 2010-07-19T21:48:28.567 | null | null | null | null |
144 | 2 | null | 138 | 18 | null | [Quick-R](http://www.statmethods.net/index.html) can be a good place to start.
A little bit data mining oriented [R and Data Mining](http://www.rdatamining.com) resources: [Examples and Case Studies](http://www.rdatamining.com/docs/r-and-data-mining-examples-and-case-studies) and [R Reference Card for Data Mining](http... | null | CC BY-SA 3.0 | null | 2010-07-19T21:48:52.670 | 2015-07-04T01:14:16.383 | 2015-07-04T01:14:16.383 | 43755 | 22 | null |
145 | 1 | 147 | null | 6 | 2607 | >
Possible Duplicate:
Locating freely available data samples
Where can I find freely accessible data sources?
I'm thinking of sites like
- http://www2.census.gov/census_2000/datasets/?
| Free Dataset Resources? | CC BY-SA 2.5 | null | 2010-07-19T21:50:16.260 | 2010-08-30T15:02:00.623 | 2017-04-13T12:44:54.643 | -1 | 138 | [
"dataset"
] |
146 | 1 | 149 | null | 15 | 14458 | A while ago a user on R-help mailing list asked about the soundness of using PCA scores in a regression. The user is trying to use some PC scores to explain variation in another PC (see full discussion [here](http://r.789695.n4.nabble.com/PCA-and-Regression-td2280038.html)). The answer was that no, this is not sound be... | Can one use multiple regression to predict one principal component (PC) from several other PCs? | CC BY-SA 3.0 | null | 2010-07-19T21:52:51.707 | 2014-12-12T11:50:37.933 | 2014-12-12T11:50:37.933 | 28666 | 144 | [
"regression",
"pca"
] |
147 | 2 | null | 145 | 6 | null | Amazon has free Public Data sets for use with EC2.
[http://aws.amazon.com/publicdatasets/](http://aws.amazon.com/publicdatasets/)
Here's a list: [http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=243](http://developer.amazonwebservices.com/connect/kbcategory.jspa?categoryID=243)
| null | CC BY-SA 2.5 | null | 2010-07-19T21:53:02.283 | 2010-07-19T21:53:02.283 | null | null | 142 | null |
148 | 2 | null | 145 | 3 | null | [http://infochimps.org/](http://infochimps.org/) - is a good resource for free data sets.
| null | CC BY-SA 2.5 | null | 2010-07-19T21:58:51.867 | 2010-07-19T21:58:51.867 | null | null | 130 | null |
149 | 2 | null | 146 | 12 | null | A principal component is a weighted linear combination of all your factors (X's).
Example: $PC_1 = 0.1 X_1 + 0.3 X_2$
There will be one component for each factor (though in general a small number are selected).
The components are created such that they have zero correlation (are orthogonal), by design.
Therefore, component P... | null | CC BY-SA 3.0 | null | 2010-07-19T22:02:10.340 | 2012-01-04T06:55:32.167 | 2012-01-04T06:55:32.167 | 74 | 74 | null |
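The zero-correlation property stated above can be verified numerically. Below is a self-contained Python sketch (my own illustration, not from the thread): two correlated variables are rotated onto their principal axes via a 2x2 Jacobi rotation, and the resulting component scores have essentially zero correlation, which is why one PC cannot "explain" another.

```python
import math
import random

random.seed(0)
# two correlated variables
x = [random.gauss(0, 1) for _ in range(1000)]
y = [0.8 * xi + 0.6 * random.gauss(0, 1) for xi in x]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

# the rotation diagonalizing the 2x2 covariance matrix gives the PC axes
sxx, syy, sxy = cov(x, x), cov(y, y), cov(x, y)
theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
c, s = math.cos(theta), math.sin(theta)

pc1 = [c * xi + s * yi for xi, yi in zip(x, y)]
pc2 = [-s * xi + c * yi for xi, yi in zip(x, y)]

r = cov(pc1, pc2) / math.sqrt(cov(pc1, pc1) * cov(pc2, pc2))
# r is zero up to floating-point error, by construction
```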
150 | 2 | null | 125 | 7 | null | For complete beginners, try William Briggs [Breaking the Law of Averages: Real-Life Probability and Statistics in Plain English](http://rads.stackoverflow.com/amzn/click/0557019907)
| null | CC BY-SA 2.5 | null | 2010-07-19T22:13:29.830 | 2010-07-19T22:13:29.830 | null | null | 25 | null |
151 | 2 | null | 118 | 246 | null | If the goal of the standard deviation is to summarise the spread of a symmetrical data set (i.e. in general how far each datum is from the mean), then we need a good method of defining how to measure that spread.
The benefits of squaring include:
- Squaring always gives a non-negative value, so the sum will always be ... | null | CC BY-SA 4.0 | null | 2010-07-19T22:31:12.830 | 2022-11-23T10:16:14.803 | 2022-11-23T10:16:14.803 | 362671 | 81 | null |
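One concrete way to see the squared-versus-absolute distinction running through this thread: the mean minimizes the sum of squared deviations, while the median minimizes the sum of absolute deviations. A quick grid-search check in Python (my own sketch, not part of the answer):

```python
data = [1.0, 2.0, 3.0, 4.0, 100.0]

def sse(c):
    """Sum of squared deviations from c."""
    return sum((x - c) ** 2 for x in data)

def sad(c):
    """Sum of absolute deviations from c."""
    return sum(abs(x - c) for x in data)

mean = sum(data) / len(data)            # 22.0, dragged up by the outlier
median = sorted(data)[len(data) // 2]   # 3.0

# a grid search over candidate centers recovers exactly those two values
grid = [i / 10 for i in range(0, 1001)]
best_sq = min(grid, key=sse)    # the mean
best_abs = min(grid, key=sad)   # the median
```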
152 | 1 | 1087 | null | 18 | 5938 | Label switching (i.e., the posterior distribution is invariant to switching component labels) is a problematic issue when using MCMC to estimate mixture models.
- Is there a standard (as in widely accepted) methodology to deal with the issue?
- If there is no standard approach then what are the pros and cons of the ... | Is there a standard method to deal with label switching problem in MCMC estimation of mixture models? | CC BY-SA 2.5 | null | 2010-07-19T22:37:38.013 | 2023-01-31T11:20:51.193 | 2011-03-27T16:03:35.180 | 919 | null | [
"bayesian",
"markov-chain-montecarlo",
"mixture-distribution"
] |
153 | 2 | null | 10 | 14 | null | The simple answer is that Likert scales are always ordinal. The intervals between positions on the scale are monotonic but never so well-defined as to be numerically uniform increments.
That said, the distinction between ordinal and interval is based on the specific demands of the analysis being performed. Under specia... | null | CC BY-SA 2.5 | null | 2010-07-19T22:39:27.230 | 2010-07-19T22:39:27.230 | null | null | 145 | null |
154 | 2 | null | 1 | 33 | null | I am currently researching the trial roulette method for my master's thesis as an elicitation technique. This is a graphical method that allows an expert to represent her subjective probability distribution for an uncertain quantity.
Experts are given counters (or what one can think of as casino chips) representing equa... | null | CC BY-SA 4.0 | null | 2010-07-19T22:40:47.947 | 2018-12-29T18:42:01.680 | 2018-12-29T18:42:01.680 | 79696 | 108 | null |
155 | 1 | null | null | 37 | 8116 | I really enjoy hearing simple explanations to complex problems. What is your favorite analogy or anecdote that explains a difficult statistical concept?
My favorite is [Murray's](http://www-stat.wharton.upenn.edu/~steele/Courses/434/434Context/Co-integration/Murray93DrunkAndDog.pdf) explanation of cointegration using a... | What is your favorite layman's explanation for a difficult statistical concept? | CC BY-SA 2.5 | null | 2010-07-19T22:43:50.967 | 2013-10-23T15:29:05.390 | 2012-04-04T16:22:03.290 | 8489 | 154 | [
"teaching",
"communication"
] |
156 | 1 | 198 | null | 4 | 271 | I know this must be standard material, but I had difficulty in finding a proof in this form.
Let $e$ be a standard white Gaussian vector of size $N$. Let all the other matrices in the following be constant.
Let $v = Xy + e$, where $X$ is an $N\times L$ matrix and $y$ is an $L\times 1$ vector, and let
$$\left\{\begin{a... | How to get to a t variable from linear regression | CC BY-SA 3.0 | null | 2010-07-19T22:50:13.297 | 2012-05-15T04:52:05.677 | 2012-05-14T21:49:03.273 | 10515 | 148 | [
"regression"
] |
157 | 2 | null | 155 | 10 | null | Definitely the Monty Hall Problem. [http://en.wikipedia.org/wiki/Monty_Hall_problem](http://en.wikipedia.org/wiki/Monty_Hall_problem)
| null | CC BY-SA 2.5 | null | 2010-07-19T22:52:22.730 | 2010-07-19T22:52:22.730 | null | null | 36 | null |
159 | 2 | null | 103 | 9 | null | [Junk Charts](http://junkcharts.typepad.com/) is always interesting and thought-provoking, usually providing both criticism of visualizations in the popular media and suggestions for improvements.
| null | CC BY-SA 2.5 | null | 2010-07-19T23:00:30.737 | 2010-07-19T23:00:30.737 | null | null | 145 | null |
160 | 2 | null | 103 | 2 | null | [Dataspora](https://web.archive.org/web/20120102015341/http://dataspora.com/blog/), a data science blog.
| null | CC BY-SA 4.0 | null | 2010-07-19T23:06:43.987 | 2022-11-29T16:32:39.563 | 2022-11-29T16:32:39.563 | 362671 | 158 | null |
161 | 1 | null | null | 20 | 15068 | Econometricians often talk about a time series being integrated of order k, I(k), where k is the minimum number of differences required to obtain a stationary time series.
What methods or statistical tests can be used to determine, given a level of confidence, the order of integration of a time series?
| What methods can be used to determine the Order of Integration of a time series? | CC BY-SA 2.5 | null | 2010-07-19T23:11:36.240 | 2010-07-20T11:14:41.487 | 2010-07-19T23:39:49.573 | 159 | 154 | [
"time-series"
] |
162 | 2 | null | 155 | 15 | null |
- If you carved your distribution (histogram) out of wood and tried to balance it on your finger, the balance point would be the mean, no matter the shape of the distribution.
- If you put a stick in the middle of your scatter plot and attached the stick to each data point with a spring, the resting point of the
st... | null | CC BY-SA 3.0 | null | 2010-07-19T23:13:32.150 | 2012-04-09T08:18:37.273 | 2012-04-09T08:18:37.273 | 74 | 74 | null |
164 | 2 | null | 145 | 3 | null | For governmental data:
US: [http://www.data.gov/](http://www.data.gov/)
World: [http://www.guardian.co.uk/world-government-data](http://www.guardian.co.uk/world-government-data)
| null | CC BY-SA 2.5 | null | 2010-07-19T23:19:44.963 | 2010-07-19T23:19:44.963 | null | null | 158 | null |
165 | 1 | 207 | null | 275 | 183632 | Maybe the concept, why it's used, and an example.
| How would you explain Markov Chain Monte Carlo (MCMC) to a layperson? | CC BY-SA 3.0 | null | 2010-07-19T23:21:05.320 | 2022-12-31T01:32:21.020 | 2017-08-10T08:21:26.363 | 11887 | 74 | [
"bayesian",
"markov-chain-montecarlo",
"intuition",
"teaching"
] |
166 | 1 | null | null | 16 | 56719 | Australia is currently having an election and understandably the media reports new political poll results daily. In a country of 22 million what percentage of the population would need to be sampled to get a statistically valid result?
Is it possible that using too large a sample could affect the results, or does stati... | How do you decide the sample size when polling a large population? | CC BY-SA 2.5 | null | 2010-07-19T23:21:35.430 | 2018-11-06T22:19:57.360 | 2010-09-17T13:20:58.950 | 442 | 154 | [
"sample-size",
"polling"
] |
167 | 2 | null | 146 | 10 | null | Principal components are orthogonal by definition, so any pair of PCs will have zero correlation.
However, PCA can be used in regression if there are a large number of explanatory variables. These can be reduced to a small number of principal components and used as predictors in a regression.
| null | CC BY-SA 2.5 | null | 2010-07-19T23:26:31.473 | 2010-07-19T23:26:31.473 | null | null | 159 | null |
168 | 1 | 179 | null | 30 | 6882 | For univariate kernel density estimators (KDE), I use Silverman's rule for calculating $h$:
\begin{equation}
0.9 \min(sd, IQR/1.34)\times n^{-0.2}
\end{equation}
What are the standard rules for multivariate KDE (assuming a Normal kernel)?
| Choosing a bandwidth for kernel density estimators | CC BY-SA 2.5 | null | 2010-07-19T23:26:44.747 | 2017-12-26T08:55:18.090 | 2015-04-23T05:51:56.433 | 9964 | 8 | [
"smoothing",
"kernel-smoothing"
] |
169 | 2 | null | 145 | 4 | null | For time series data, try the [Time Series Data Library](http://robjhyndman.com/TSDL).
| null | CC BY-SA 2.5 | null | 2010-07-19T23:27:36.400 | 2010-07-19T23:27:36.400 | null | null | 159 | null |
170 | 1 | 174 | null | 132 | 44159 | Are there any free statistical textbooks available?
| Free statistical textbooks | CC BY-SA 2.5 | null | 2010-07-19T23:29:54.663 | 2023-06-02T12:01:00.867 | 2010-10-27T09:23:26.720 | 69 | 8 | [
"teaching",
"references"
] |
171 | 2 | null | 161 | 16 | null | There are a number of statistical tests (known as "unit root tests") for dealing with this problem. The most popular is probably the "Augmented Dickey-Fuller" (ADF) test, although the Phillips-Perron (PP) test and the KPSS test are also widely used.
Both the ADF and PP tests are based on a null hypothesis of a unit ro... | null | CC BY-SA 2.5 | null | 2010-07-19T23:32:30.337 | 2010-07-19T23:32:30.337 | null | null | 159 | null |
172 | 2 | null | 166 | 14 | null | Sample size doesn't much depend on the population size, which is counter-intuitive to many.
Most polling companies use 400 or 1000 people in their samples.
There is a reason for this:
A sample size of 400 will give you a confidence interval of +/-5% 19 times out of 20 (95%)
A sample size of 1000 will give you a confide... | null | CC BY-SA 2.5 | null | 2010-07-19T23:34:18.163 | 2010-09-11T18:48:28.197 | 2010-09-11T18:48:28.197 | 74 | 74 | null |
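The ±5% and ±3% figures quoted above follow from the normal-approximation margin of error for a proportion at the worst case p = 0.5. A one-function Python sketch (function name mine):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion estimated from a sample of size n (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# margin_of_error(400)  -> about 0.049 (+/-5%)
# margin_of_error(1000) -> about 0.031 (+/-3%)
```

Note that n, not the population size, drives the width, which is why 1000 respondents suffice for a country of 22 million.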
173 | 1 | null | null | 23 | 10125 | I recently started working for a tuberculosis clinic. We meet periodically to discuss the number of TB cases we're currently treating, the number of tests administered, etc. I'd like to start modeling these counts so that we're not just guessing whether something is unusual or not. Unfortunately, I've had very littl... | Time series for count data, with counts < 20 | CC BY-SA 3.0 | null | 2010-07-19T23:37:22.980 | 2017-02-27T14:37:11.170 | 2017-02-27T14:37:11.170 | 11887 | 71 | [
"r",
"time-series",
"poisson-distribution",
"count-data",
"epidemiology"
] |
174 | 2 | null | 170 | 78 | null | Online books include
- http://davidmlane.com/hyperstat/
- http://vassarstats.net/textbook/
- https://dwstockburger.com/Multibook/mbk.htm
- https://web.archive.org/web/20180122061046/http://bookboon.com/en/statistics-ebooks
- http://www.freebookcentre.net/SpecialCat/Free-Statistics-Books-Download.html
Update: I c... | null | CC BY-SA 4.0 | null | 2010-07-19T23:37:43.807 | 2023-06-02T11:48:32.663 | 2023-06-02T11:48:32.663 | 362671 | 159 | null |
175 | 1 | null | null | 93 | 218140 | Oftentimes a statistical analyst is handed a dataset and asked to fit a model using a technique such as linear regression. Very frequently the dataset is accompanied by a disclaimer similar to "Oh yeah, we messed up collecting some of these data points -- do what you can".
This situation leads to regression fit... | How should outliers be dealt with in linear regression analysis? | CC BY-SA 2.5 | null | 2010-07-19T23:39:49.730 | 2020-09-18T08:21:19.847 | 2010-08-13T12:59:06.957 | 159 | 13 | [
"regression",
"outliers"
] |
176 | 2 | null | 22 | 52 | null | Let us say a man rolls a six-sided die and it has outcomes 1, 2, 3, 4, 5, or 6. Furthermore, he says that if it lands on a 3, he'll give you a free textbook.
Then informally:
The Frequentist would say that each outcome has an equal 1 in 6 chance of occurring. She views probability as being derived from long run freque... | null | CC BY-SA 3.0 | null | 2010-07-19T23:40:01.007 | 2011-09-18T10:09:48.690 | 2011-09-18T10:09:48.690 | 81 | 81 | null |
177 | 2 | null | 175 | 39 | null | Rather than exclude outliers, you can use a robust method of regression. In R, for example, the [rlm() function from the MASS package](http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/rlm.html) can be used instead of the `lm()` function. The method of estimation can be tuned to be more or less robust to outlier... | null | CC BY-SA 3.0 | null | 2010-07-19T23:45:44.677 | 2011-10-10T09:02:51.173 | 2011-10-10T09:02:51.173 | 159 | 159 | null |
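To illustrate what a robust fit does differently from `lm()`, here is a minimal pure-Python IRLS sketch with Huber weights for a straight-line fit. This is my own toy illustration of the general M-estimation idea, not the actual algorithm inside MASS's `rlm()`:

```python
def huber_line_fit(xs, ys, delta=1.345, iters=50):
    """Fit y = a + b*x by iteratively reweighted least squares,
    downweighting points with large residuals (Huber weights)."""
    w = [1.0] * len(xs)
    a = b = 0.0
    for _ in range(iters):
        # weighted least squares with the current weights
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, xs)) / sw
        my = sum(wi * yi for wi, yi in zip(w, ys)) / sw
        sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, xs))
        sxy = sum(wi * (xi - mx) * (yi - my)
                  for wi, xi, yi in zip(w, xs, ys))
        b = sxy / sxx
        a = my - b * mx
        resid = [yi - (a + b * xi) for xi, yi in zip(xs, ys)]
        # robust scale estimate from the median absolute residual
        scale = sorted(abs(r) for r in resid)[len(resid) // 2] / 0.6745 or 1.0
        w = [1.0 if abs(r) <= delta * scale else delta * scale / abs(r)
             for r in resid]
    return a, b
```

With one gross outlier the fitted slope stays near the clean-data value, whereas ordinary least squares would be pulled strongly toward the outlier.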
178 | 2 | null | 3 | 11 | null | [RapidMiner](http://rapid-i.com/) for data and text mining
| null | CC BY-SA 3.0 | null | 2010-07-19T23:48:50.943 | 2013-04-20T07:21:26.257 | 2013-04-20T07:21:26.257 | 74 | 74 | null |
179 | 2 | null | 168 | 21 | null | For a univariate KDE, you are better off using something other than Silverman's rule which is based on a normal approximation. One excellent approach is the Sheather-Jones method, easily implemented in R; for example,
```
plot(density(precip, bw="SJ"))
```
The situation for multivariate KDE is not so well studied, and... | null | CC BY-SA 2.5 | null | 2010-07-19T23:59:29.487 | 2010-07-19T23:59:29.487 | null | null | 159 | null |
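For completeness, here is the univariate Silverman rule from the question together with a Gaussian KDE, sketched in Python (the answer above recommends Sheather–Jones instead, which R's `bw="SJ"` provides and which is harder to hand-roll; the function names are mine):

```python
import math
import statistics

def silverman_bw(xs):
    """Silverman's rule of thumb: 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
    sd = statistics.stdev(xs)
    q1, _, q3 = statistics.quantiles(xs, n=4)
    return 0.9 * min(sd, (q3 - q1) / 1.34) * len(xs) ** -0.2

def gaussian_kde(xs, h):
    """Return the Gaussian kernel density estimate as a callable density."""
    norm = len(xs) * h * math.sqrt(2 * math.pi)
    return lambda x: sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
                         for xi in xs) / norm
```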
180 | 2 | null | 145 | 5 | null | I really like the [FRED](http://research.stlouisfed.org/fred2/), from the St. Louis Fed (economics data). You can chart the series or more than one series, you can do some transformations to your data and chart it, and the NBER recessions are shaded.
| null | CC BY-SA 2.5 | null | 2010-07-20T00:06:20.580 | 2010-07-20T00:06:20.580 | null | null | 90 | null |
181 | 1 | 1097 | null | 794 | 1053273 | Is there a standard and accepted method for selecting the number of layers, and the number of nodes in each layer, in a feed-forward neural network? I'm interested in automated ways of building neural networks.
| How to choose the number of hidden layers and nodes in a feedforward neural network? | CC BY-SA 3.0 | null | 2010-07-20T00:15:02.920 | 2022-08-31T12:09:15.680 | 2017-03-15T17:51:15.800 | 153217 | 159 | [
"model-selection",
"neural-networks"
] |
182 | 2 | null | 175 | 30 | null | Sometimes outliers are bad data, and should be excluded, such as typos. Sometimes they are Wayne Gretzky or Michael Jordan, and should be kept.
Outlier detection methods include:
Univariate -> boxplot. Outside of 1.5 times the inter-quartile range is an outlier.
Bivariate -> scatterplot with confidence ellipse. outside of... | null | CC BY-SA 2.5 | null | 2010-07-20T00:15:47.393 | 2010-09-09T00:10:56.520 | 2010-09-09T00:10:56.520 | 74 | 74 | null |
183 | 1 | 518 | null | 4 | 1909 | I need to analyze the 100k MovieLens dataset for clustering with two algorithms of my choice, between the likes of k-means, agnes, diana, dbscan, and several others. What tools (like Rattle, or Weka) would be best suited to help me make some simple clustering analysis over this dataset?
| What tools could be used for applying clustering algorithms on MovieLens? | CC BY-SA 2.5 | null | 2010-07-20T00:20:51.767 | 2013-07-15T11:25:42.467 | null | null | 166 | [
"clustering"
] |
184 | 2 | null | 33 | 7 | null | Try using the `stl()` function for time series decomposition. It provides a very flexible method for extracting a seasonal component from a time series.
| null | CC BY-SA 2.5 | null | 2010-07-20T00:21:58.193 | 2010-07-20T00:21:58.193 | null | null | 159 | null |
185 | 2 | null | 124 | 11 | null | A great introductory text covering the topics you mentioned is [Introduction to Information Retrieval](http://www.informationretrieval.org), which is available online in full text for free.

| null | CC BY-SA 4.0 | null | 2010-07-20T00:30:00.173 | 2018-12-30T19:39:56.940 | 2018-12-30T19:39:56.940 | 79696 | 80 | null |
187 | 2 | null | 181 | 16 | null | As far as I know there is no way to automatically select the number of layers and the number of neurons in each layer. But there are networks that can build their topology automatically, like EANNs (Evolutionary Artificial Neural Networks, which use genetic algorithms to evolve the topology).
There are several approaches, a more or ... | null | CC BY-SA 3.0 | null | 2010-07-20T00:47:45.310 | 2017-02-20T16:03:33.397 | 2017-02-20T16:03:33.397 | 128677 | 119 | null |
188 | 2 | null | 165 | 94 | null | I'd probably say something like this:
"Anytime we want to talk about probabilities, we're really integrating a density. In Bayesian analysis, a lot of the densities we come up with aren't analytically tractable: you can only integrate them -- if you can integrate them at all -- with a great deal of suffering. So what... | null | CC BY-SA 3.0 | null | 2010-07-20T00:52:13.287 | 2015-02-16T06:06:58.363 | 2015-02-16T06:06:58.363 | 57408 | 61 | null |
189 | 2 | null | 23 | 12 | null | Let $F(x)$ denote the cdf; then you can always approximate the pdf of a continuous random variable by calculating $$ \frac{F(x_2) - F(x_1)}{x_2 - x_1},$$ where $x_1$ and $x_2$ are on either side of the point where you want to know the pdf and the distance $|x_2 - x_1|$ is small.
| null | CC BY-SA 3.0 | null | 2010-07-20T00:59:34.643 | 2014-12-03T01:21:36.467 | 2014-12-03T01:21:36.467 | 5339 | 173 | null |
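The approximation in the answer above is easy to check numerically. A short Python sketch using the standard normal, whose cdf is available through the error function (function names mine):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pdf_from_cdf(F, x, h=1e-5):
    """Approximate the pdf at x by a centered difference of the cdf,
    i.e. (F(x + h) - F(x - h)) / (2h) for small h."""
    return (F(x + h) - F(x - h)) / (2 * h)

# at x = 0 the true standard normal pdf is 1/sqrt(2*pi), about 0.39894
```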
190 | 2 | null | 170 | 14 | null | [A New View of Statistics](http://www.sportsci.org/resource/stats/) by Will G. Hopkins is great! It is designed to help you understand how to understand the results of statistical analyses, not how to prove statistical theorems.
| null | CC BY-SA 3.0 | null | 2010-07-20T01:07:38.383 | 2015-03-02T00:01:35.840 | 2015-03-02T00:01:35.840 | 25 | 25 | null |
191 | 2 | null | 128 | 12 | null | The Bland-Altman plot is more widely known as the Tukey Mean-Difference Plot (one of many charts devised by John Tukey [http://en.wikipedia.org/wiki/John_Tukey](http://en.wikipedia.org/wiki/John_Tukey)).
The idea is that x-axis is the mean of your two measurements, which is your best guess as to the "correct" result an... | null | CC BY-SA 2.5 | null | 2010-07-20T01:17:17.377 | 2010-07-20T01:17:17.377 | null | null | 173 | null |
192 | 1 | 293 | null | 5 | 4107 | I'm aware that this one is far from a yes-or-no question, but I'd like to know which techniques you prefer in categorical data analysis - i.e. cross tabulation with two categorical variables.
I've come up with:
- χ2 test - well, this is quite self-explanatory
- Fisher's exact test - when n < 40,
- Yates' continuity cor... | Cross tabulation of two categorical variables: recommended techniques | CC BY-SA 2.5 | null | 2010-07-20T01:18:11.523 | 2020-11-05T10:10:06.100 | 2020-10-30T16:05:03.157 | 930 | 1356 | [
"categorical-data",
"contingency-tables",
"association-measure"
] |
193 | 2 | null | 166 | 9 | null | Suppose that you want to know what percentage of people would vote for a particular candidate (say, $\pi$, note that by definition $\pi$ is between 0 and 100). You sample $N$ voters at random to find out how they would vote and your survey of these $N$ voters tells you that the percentage is $p$. So, you would like to ... | null | CC BY-SA 3.0 | null | 2010-07-20T01:45:12.020 | 2016-04-08T20:00:37.107 | 2016-04-08T20:00:37.107 | -1 | null | null |
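The interval estimate this answer builds toward is commonly the normal-approximation ("Wald") interval. A small Python sketch, keeping percentages on the 0-100 scale used above; the survey numbers below are purely hypothetical:

```python
import math

def wald_interval(p, n, z=1.96):
    # p is the observed percentage (0-100) from a sample of n voters;
    # z = 1.96 gives an approximate 95% confidence interval
    se = math.sqrt(p * (100.0 - p) / n)
    return p - z * se, p + z * se

# Hypothetical survey: 52% support among N = 1000 sampled voters
low, high = wald_interval(52.0, 1000)
```

Note this approximation can behave poorly when $p$ is near 0 or 100 or when $N$ is small; alternatives such as the Wilson interval are then preferable.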
194 | 1 | 200 | null | 9 | 1387 | I am sure that everyone who's trying to find patterns in historical stock market data or betting history would like to know about this. Given a huge sets of data, and thousands of random variables that may or may not affect it, it makes sense to ask any patterns that you extract out from the data are indeed true patter... | Data Mining-- how to tell whether the pattern extracted is meaningful? | CC BY-SA 4.0 | null | 2010-07-20T01:47:36.197 | 2022-05-15T06:03:20.027 | 2022-05-15T06:03:20.027 | 175 | 175 | [
"data-mining"
] |
195 | 1 | 2872 | null | 8 | 1408 | I am looking at fitting distributions to data (with a particular focus on the tail) and am leaning towards Anderson-Darling tests rather than Kolmogorov-Smirnov. What do you think are the relative merits of these or other tests for fit (e.g. Cramer-von Mises)?
| What do you think is the best goodness of fit test? | CC BY-SA 2.5 | null | 2010-07-20T02:01:05.727 | 2010-09-20T00:29:57.047 | null | null | 173 | [
"hypothesis-testing",
"fitting"
] |
196 | 1 | 197 | null | 31 | 14239 | Besides [gnuplot](http://en.wikipedia.org/wiki/Gnuplot) and [ggobi](http://www.ggobi.org/), what open source tools are people using for visualizing multi-dimensional data?
Gnuplot is more or less a basic plotting package.
Ggobi can do a number of nifty things, such as:
- animate data along a dimension or among discre... | Open source tools for visualizing multi-dimensional data? | CC BY-SA 3.0 | null | 2010-07-20T02:17:24.800 | 2016-07-29T02:59:10.510 | 2012-11-21T06:25:07.173 | 9007 | 87 | [
"data-visualization",
"open-source"
] |
197 | 2 | null | 196 | 13 | null | How about R with [ggplot2](http://had.co.nz/ggplot2/)?
Other tools that I really like:
- Processing
- Prefuse
- Protovis
| null | CC BY-SA 2.5 | null | 2010-07-20T02:24:38.993 | 2010-07-20T02:42:01.603 | 2010-07-20T02:42:01.603 | 5 | 5 | null |
198 | 2 | null | 156 | 4 | null | Start with the distribution of $\bar{y}$, show that since $v$ is normal, $\bar{y}$ is multivariate normal and that consequently $u$ must also be a multivariate normal; also show that the covariance matrix of $\bar{y}$ is of the form $\sigma^2\cdot(X^T X)^{-1}$ and thus -- if $\sigma^2$ were known -- the variance of $u$... | null | CC BY-SA 3.0 | null | 2010-07-20T02:32:43.653 | 2012-05-15T04:52:05.677 | 2012-05-15T04:52:05.677 | 183 | 61 | null |
199 | 2 | null | 194 | 6 | null | You could try:
- Bagging http://en.m.wikipedia.org/wiki/Bootstrap_aggregating
- Boosting http://en.m.wikipedia.org/wiki/Boosting
- Cross validation http://en.m.wikipedia.org/wiki/Cross-validation_(statistics)
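A minimal sketch of the cross-validation idea from the last link, showing only the index bookkeeping (the actual model fitting and scoring on each train/test split are left out):

```python
def k_fold_indices(n, k):
    # Split indices 0..n-1 into k disjoint test folds; for each fold,
    # yield (train_indices, test_indices)
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

splits = list(k_fold_indices(10, 5))
```

Each observation appears in exactly one test fold, so every data point is used for validation exactly once.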
| null | CC BY-SA 2.5 | null | 2010-07-20T02:32:53.093 | 2010-07-20T02:32:53.093 | null | null | 5 | null |
200 | 2 | null | 194 | 16 | null | If you want to know that a pattern is meaningful, you need to show what it actually means. Statistical tests do not do this. Unless your data can be said to be in some sense "complete", inferences drawn from the data will always be provisional.
You can increase your confidence in the validity of a pattern by testing aga... | null | CC BY-SA 2.5 | null | 2010-07-20T02:48:45.177 | 2012-08-20T10:05:15.320 | 2012-08-20T10:05:15.320 | 174 | 174 | null |
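One concrete (if simplistic) way to probe whether an extracted pattern exceeds chance is a permutation test: destroy the pairing between the candidate predictor and the outcome by shuffling, and count how often a pattern at least as strong appears. A hedged Python sketch; the group-mean-difference statistic is just an illustrative choice:

```python
import random

def mean_diff(labels, values):
    # Absolute difference in group means for binary labels 0/1
    a = [v for l, v in zip(labels, values) if l == 1]
    b = [v for l, v in zip(labels, values) if l == 0]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def permutation_pvalue(labels, values, stat, n_perm=2000, seed=0):
    # Fraction of shuffles producing a statistic at least as extreme as
    # the observed one (+1 smoothing so the estimate is never exactly 0)
    rng = random.Random(seed)
    observed = stat(labels, values)
    shuffled = list(values)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if stat(labels, shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Two clearly separated hypothetical groups: shuffling should rarely
# reproduce a gap this large
p = permutation_pvalue([1] * 10 + [0] * 10,
                       list(range(10, 20)) + list(range(10)),
                       mean_diff)
```

This quantifies chance-level strength of the pattern but, as the answer stresses, it still does not show what the pattern *means*.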
201 | 2 | null | 7 | 10 | null | Start R and type `data()`. This will show all datasets in the search path.
Many additional datasets are available in add-on packages.
For example, there are some interesting real-world social science datasets in the `AER` package.
| null | CC BY-SA 2.5 | null | 2010-07-20T03:11:36.027 | 2010-07-20T03:11:36.027 | null | null | 183 | null |
202 | 2 | null | 138 | 13 | null |
- If you like learning through videos, I collated a list of R training videos.
- I also prepared a general post on learning R with suggestions on books, online manuals, blogs, videos, user interfaces, and more.
| null | CC BY-SA 3.0 | null | 2010-07-20T03:13:22.953 | 2011-05-27T03:38:47.843 | 2011-05-27T03:38:47.843 | 183 | 183 | null |
203 | 1 | null | null | 23 | 27540 | Following on from [this question](https://stats.stackexchange.com/questions/10/under-what-conditions-should-likert-scales-be-used-as-ordinal-or-interval-data):
Imagine that you want to test for differences in central tendency between two groups (e.g., males and females)
on a 5-point Likert item (e.g., satisfaction with... | Group differences on a five point Likert item | CC BY-SA 2.5 | null | 2010-07-20T03:31:45.820 | 2018-10-11T11:08:24.087 | 2017-04-13T12:44:32.747 | -1 | 183 | [
"t-test",
"ordinal-data",
"likert",
"scales"
] |
204 | 2 | null | 196 | 11 | null | The lattice package in R.
>
Lattice is a powerful and elegant high-level data visualization
system, with an emphasis on multivariate data, that is sufficient for
typical graphics needs, and is also flexible enough to handle most
nonstandard requirements.
[Quick-R has a quick introduction](http://www.statmethods.n... | null | CC BY-SA 3.0 | null | 2010-07-20T03:35:58.693 | 2012-11-21T06:38:02.093 | 2012-11-21T06:38:02.093 | 183 | 183 | null |
205 | 1 | 353 | null | 23 | 2446 | I'm curious about why we treat fitting GLMS as though they were some special optimization problem. Are they? It seems to me that they're just maximum likelihood, and that we write down the likelihood and then ... we maximize it! So why do we use Fisher scoring instead of any of the myriad of optimization schemes tha... | Why do we make a big fuss about using Fisher scoring when we fit a GLM? | CC BY-SA 2.5 | null | 2010-07-20T03:51:24.050 | 2021-11-18T13:19:22.933 | 2021-11-18T13:19:22.933 | 11887 | 187 | [
"generalized-linear-model",
"optimization",
"history",
"fisher-scoring"
] |
206 | 1 | 209 | null | 73 | 1110763 | What is the difference between discrete data and continuous data?
| What is the difference between discrete data and continuous data? | CC BY-SA 3.0 | null | 2010-07-20T03:53:54.767 | 2020-01-30T17:24:38.353 | 2011-05-27T03:35:28.263 | 183 | 188 | [
"continuous-data",
"discrete-data"
] |
207 | 2 | null | 165 | 256 | null | First, we need to understand what is a Markov chain. Consider the following [weather](http://en.wikipedia.org/wiki/Examples_of_Markov_chains#A_very_simple_weather_model) example from Wikipedia. Suppose that weather on any given day can be classified into two states only: sunny and rainy. Based on past experience, we kn... | null | CC BY-SA 3.0 | null | 2010-07-20T04:00:14.387 | 2016-08-18T18:05:16.030 | 2016-08-18T18:05:16.030 | 67822 | null | null |
208 | 2 | null | 206 | 13 | null | Temperatures are continuous. It can be 23 degrees, 23.1 degrees, 23.100004 degrees.
Sex is discrete. You can only be male or female (in classical thinking, anyway), something you could represent with a whole number like 1, 2, etc.
The difference is important as many statistical and data mining algorithms can handle one... | null | CC BY-SA 3.0 | null | 2010-07-20T04:07:11.093 | 2017-06-28T19:16:14.737 | 2017-06-28T19:16:14.737 | 74 | 74 | null |
209 | 2 | null | 206 | 64 | null | Discrete data can only take particular values. There may potentially be an infinite number of those values, but each is distinct and there's no grey area in between. Discrete data can be numeric -- like numbers of apples -- but it can also be categorical -- like red or blue, or male or female, or good or bad.
Continuou... | null | CC BY-SA 4.0 | null | 2010-07-20T04:16:52.663 | 2020-01-30T17:24:38.353 | 2020-01-30T17:24:38.353 | 272284 | 174 | null |
210 | 2 | null | 206 | 27 | null | Data is always discrete. Given a sample of `n` values on a variable, the maximum number of distinct values the variable can take is equal to `n`. See this quote
>
All actual sample spaces are discrete, and all observable random
variables have discrete distributions. The continuous distribution is
a mathematical ... | null | CC BY-SA 3.0 | null | 2010-07-20T04:19:12.023 | 2012-03-30T05:31:05.047 | 2012-03-30T05:31:05.047 | 183 | 183 | null |
211 | 2 | null | 138 | 7 | null | I have written a document that is freely available at my website and on CRAN. See the linked page:
[icebreakeR](http://www.ms.unimelb.edu.au/~andrewpr/r-users/)
The datasets that are used in the document are also linked from that page. Feedback is welcome and appreciated!
Andrew
| null | CC BY-SA 2.5 | null | 2010-07-20T04:49:07.680 | 2010-07-20T04:49:07.680 | null | null | 187 | null |
212 | 1 | 5001 | null | 5 | 1609 | I have 2 ASR (Automatic Speech Recognition) models, providing me with text transcriptions for my test data. The error measure I use is Word Error Rate.
What methods do I have to test for statistical significance of my new results?
An example:
I have an experiment with 10 speakers, all having 100 (the same) sentences, tot... | What method to use to test Statistical Significance of ASR results | CC BY-SA 2.5 | null | 2010-07-20T04:54:20.793 | 2010-11-29T18:25:11.713 | 2010-07-21T06:19:29.143 | 190 | 190 | [
"statistical-significance"
] |
213 | 1 | 532 | null | 103 | 68364 | Suppose I have a large set of multivariate data with at least three variables. How can I find the outliers? Pairwise scatterplots won't work as it is possible for an outlier to exist in 3 dimensions that is not an outlier in any of the 2 dimensional subspaces.
I am not thinking of a regression problem, but of true mult... | What is the best way to identify outliers in multivariate data? | CC BY-SA 2.5 | null | 2010-07-20T05:02:33.793 | 2019-05-16T14:50:42.977 | 2016-08-20T15:26:22.127 | 28666 | 159 | [
"multivariate-analysis",
"outliers"
] |
214 | 2 | null | 170 | 8 | null | Some free Stats textbooks are also available [here](http://www.e-booksdirectory.com/mathematics.php).
| null | CC BY-SA 2.5 | null | 2010-07-20T05:02:42.573 | 2010-07-20T05:02:42.573 | null | null | 40 | null |
215 | 2 | null | 195 | 2 | null | I'm not sure about these tests, so this answer may be off-topic; apologies if so. But are you sure that you want a test? It really depends on what the purpose of the exercise is. Why are you fitting the distributions to the data, and what will you do with the fitted distributions afterward?
If you want to know ... | null | CC BY-SA 2.5 | null | 2010-07-20T05:03:12.730 | 2010-07-20T05:03:12.730 | null | null | 187 | null |
216 | 1 | 217 | null | 10 | 719 | What are some good visualization libraries for online use? Are they easy to use and is there good documentation?
| Web visualization libraries | CC BY-SA 3.0 | null | 2010-07-20T05:04:40.840 | 2017-11-23T14:22:40.880 | 2017-11-23T08:47:55.583 | 11887 | 191 | [
"data-visualization",
"protovis"
] |
217 | 2 | null | 216 | 7 | null | IMO, [Protovis](http://vis.stanford.edu/protovis/) is the best and is very well documented and supported. It is the basis for my [webvis](http://cran.r-project.org/web/packages/webvis/index.html) R package.
These are also very good, although they have more of a learning curve:
- Processing
- Prefuse
| null | CC BY-SA 2.5 | null | 2010-07-20T05:10:08.383 | 2010-07-20T05:15:51.977 | 2010-07-20T05:15:51.977 | 5 | 5 | null |