Online Sequencer Forums

Full Version: Advertised Sequences and View Counts
Advertised Sequences and View Counts
A Statistical Inference to Answer an OS-Related Question

Welcome to what might be the one time I actually get a thread I made pinned! But on a more serious note, this was a huge project I did, and will hopefully be useful to other users as well. While I'll try and make it as easy to understand as possible (as I know that there are a multitude of users who don't know statistics), there are a few things that I'll assume that you already know.

Since I have a feeling that a majority of OS users will just TL;DR this, I'll mark important sections like this.

Assumed Knowledge
I will be assuming that you already have knowledge on the things listed below:
Addition, subtraction, multiplication, division, fractions, percentages, variables, linear functions, PEMDAS, inequalities, radicals, and averages/means. Also you should know a little about experiments.

Now that that's out of the way, I'll give you a quick rundown on many of the different terms and symbols that you might find in this thread.
  • Population: the entire set of individuals of whom we hope to learn something about. In this case the population would be all sequences on OS.
  • Sample: a selected part of a population which we have data on and can use to make inferences about the population.
  • Standard Deviation: a measurement of how much the data tends to vary. It's not inaccurate to call it the average variation.
  • Standard Error: no, this doesn't mean that an error has been committed. It's an estimate of a statistic's standard deviation, calculated from the data we have at hand.
  • Distribution: in this sense, a distribution describes how the data values are spread out: which values occur, and how often, relative to each other.
  • Unimodal (of a distribution): having one peak.
  • Bell-Shaped: high in the middle and tapering off toward both ends, like the outline of a bell: https://www.google.com/search?q=bell+sha...9wKVieZGgM
  • Normal (of a distribution): a specific unimodal, symmetric, bell-shaped distribution: https://www.google.com/search?q=normal+d...NHmAY_dmfM:
  • Symmetric (of a distribution): looking the same on both sides of the mean (a mirror image).
  • _A= a symbol with the subscript "A" (in this case) refers to the group of sequences that were advertised.
  • _N= a symbol with the subscript "N" (in this case) refers to the group of sequences that were not advertised.
  • µ= this fancy symbol (pronounced "mew") refers to the mean of a population.
  • x̄= this refers to the mean of a sample.
  • s= this refers to the standard deviation of a sample.
  • n= this simply refers to the number of individuals in a sample.
  • µx̄A-x̄N= this one's a bit more complicated. Let's start simple. Every time you select a few individuals from a population and collect data on those individuals, you will produce slightly different results. For example, if I chose 10 people and tested how many push-ups they could do in one sitting, the mean number of push-ups they could do would be different from the mean number of push-ups another 10 people could do. There is a distribution for this, which shows us the probability of getting different results from different samples. It is called a sampling distribution. This symbol represents the mean of a sampling distribution for the difference in the means: A-N.
  • SE(A-N)= this is the standard error for the distribution described in the previous sentence.
  • t-model: a giant group of distributions that are all unimodal and bell-shaped. The distributions all vary based on the number of degrees of freedom (it's not necessary to explain what degrees of freedom are).
  • Hypothesis test: a formal procedure that uses sample data to decide whether an assumed hypothesis should be rejected.
  • 2-sample t-test: a hypothesis test that compares the means of two groups using a t-model.
  • Null hypothesis (H0): this is the assumed hypothesis. For a 2-sample t-test, it usually states that there is no difference between the means.
  • t-ratio: the test statistic for a hypothesis test. It lets us determine a P-value (below).
  • P-value: the probability of getting the data we did from the sample if the null hypothesis was actually true.
  • Alternative hypothesis (HA): a small enough P-value will cause us to reject the null hypothesis. The alternative hypothesis is what we accept if we reject the null hypothesis. Depending on what we're looking for, it could say that there is a difference (in general) or it could say the difference in a specific direction. Also: in this case, while it does have the subscript "A," it is not specifically referring to the group of sequences that were advertised.
  • Alpha level (α): the boundary that determines when I reject the null hypothesis. If a P-value is smaller than this boundary, then I reject it.
  • Confidence interval: this is a tool used to attempt to estimate the true mean of a population using the sample data we have at hand.
  • 95% Confidence: to put it simply, if you took a large number of different samples from a population, you'd expect about 95% of them to capture the true mean of the population with their confidence intervals.
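The "sampling distribution" idea above is the trickiest item in this list, so here's a minimal Python sketch of it (using made-up numbers, not any real OS data): draw two samples from the same population over and over, and watch how the difference in their means varies from draw to draw.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of "view counts" (made-up numbers, not real OS data).
population = [random.gauss(20, 5) for _ in range(10_000)]

# Repeatedly draw two samples of 15 and record the difference in their means.
diffs = []
for _ in range(1000):
    a = random.sample(population, 15)
    b = random.sample(population, 15)
    diffs.append(statistics.mean(a) - statistics.mean(b))

# Both samples come from the same population, so the sampling distribution
# of the difference in means should be centered near 0.
print(round(statistics.mean(diffs), 2))
print(round(statistics.stdev(diffs), 2))
```

The spread of `diffs` here is exactly the thing that SE(A-N) tries to estimate from a single pair of samples.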
Well, I think that's everything. Now onto getting started with what I did.

The Question
Hey! Remember This? https://onlinesequencer.net/forum/showth...p?tid=2655
You might've noticed how there were only 15 of the 30 non-drum kit instruments advertised there. If you were especially curious about why, you might've visited my user page and run into the other 15 sequences. Well, this wasn't your everyday ideas factory. It was an experiment!

You see, many users (me included) self-advertise their sequences on the OS forums. While I can't be sure of why they do it, I can make a few guesses, such as:
  • To get feedback from other users
  • To organize their music
  • To increase the amount of views their sequences get
Now, this last one raised an interesting question for me, one that can be answered with statistical inference.
Does advertising your sequences on the OS forums actually increase their view count?

The Experiment
So, in order to answer this, I set up an experiment. This experiment had 2 treatment groups: the first being 15 sequences that were advertised, and the second being the other 15 sequences that weren't. The 30 sequences were made, and each was assigned a number based on the picture below, which you'll probably recognize as the instrument select dropdown menu. Each sequence used 1 instrument, different from the instruments of the other sequences. Each sequence was the same note-wise, although maybe not octave-wise (as many instruments have different ranges). This was mostly for efficiency purposes.
[attachment=95]
Each sequence was assigned a number from 1 to 30 based on the instrument it used. I assigned the numbers from top to bottom, ignoring the drum kits (so electric piano would be 1, grand piano would be 2.....music box would be 5, xylophone would be 6, and so on). I then used a random sequence generator, which randomly divided the 30 numbers up into 2 columns. The numbers in the left column represented the advertised sequences, and the numbers in the right column represented the un-advertised sequences. The output is below.
[attachment=96]
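For anyone curious, the random assignment step can be reproduced in a few lines of Python. This is just a sketch of the idea: the actual split came from a web-based generator (its output is shown above), and the seed here is arbitrary, so these two columns won't match the real ones.

```python
import random

random.seed(0)  # arbitrary seed; the real assignment used a web generator

# The 30 non-drum-kit instruments, numbered 1-30 top to bottom.
numbers = list(range(1, 31))
random.shuffle(numbers)

advertised = sorted(numbers[:15])      # "left column": advertised sequences
not_advertised = sorted(numbers[15:])  # "right column": un-advertised sequences
print(advertised)
print(not_advertised)
```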

Bias Control
Here's where I get to give my shoutouts to Lucent. Without her, these results would've been a lot less reliable. You see, since all of the sequences were identical note-wise, someone would've pointed this out. To prevent that, I contacted Lucent and asked if the experiment thread (https://onlinesequencer.net/forum/showth...p?tid=2655) could be locked so that no one could reply to it. If someone had replied saying that the sequences were all the same, that would definitely affect how (and how much) the sequences would be viewed, and none of the results from my experiment would've been trustworthy. That's why I had to make sure it couldn't be replied to.


The Data
Once the experiment was over, the view counts of each sequence were counted up and put into the data table below.
[attachment=97]
Oh yeah, I forgot to show that there was candy in the data.

Hypotheses
There are 2 hypotheses in a hypothesis test: the null hypothesis and the alternative hypothesis. The null hypothesis (H0) is that advertising sequences does not increase the view counts of the sequences advertised (µA = µN, or equivalently µA - µN = 0). The alternative hypothesis (HA) is that advertising sequences does increase the view counts of the sequences advertised (µA > µN, or equivalently µA - µN > 0). The reason the hypotheses use µ is that we want to know if this is true for the population (as we already know the results for the sample). We can write the hypotheses to show a potential causal relationship because an experiment was performed. (There are 2 ways to get data for tests like this: surveying or doing an experiment. An experiment controls for outside factors that can influence the data, while surveying can't. So we can imply causation with a well-designed experiment, but not with a survey.) You'll see why there is an alternative way of writing the hypotheses (using µA - µN) later on.

Conditions
In order to perform a hypothesis test, some conditions must be met.
  • Independence among the data.
  • Independence between the groups.
  • Randomization was used.
  • The data came from a population that has a Normal distribution or we have a large sample.
While we cannot be sure about the first 2, we can think about how the data was collected to decide whether they might be reasonable to assume true. I do think that it is reasonable to assume that the number of views one sequence gets doesn't affect that of other sequences, and that the number of views the advertised sequences get does not affect the number of views the un-advertised sequences get.

We can check to see if randomization was used. In this case, it was.

Since we don't have a large sample, we'll have to check if it is likely that the data came from a population that is Normally distributed. We do this by making histograms of the data (one histogram for each group). See the histograms below.
[attachment=98][attachment=99]

Since both histograms are unimodal and symmetric, we are good to proceed with the test. We'll use a t-model (with the parameters below) to perform a 2-sample t-test for the difference in means.
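If you want to run the same eyeball check on your own data without any plotting software, a crude text histogram is enough to judge unimodality and symmetry. The numbers below are hypothetical stand-ins, not the actual view counts from the experiment.

```python
import random

random.seed(1)

# Hypothetical view counts for one group of 15 sequences (not the real data).
views = [round(random.gauss(12, 3)) for _ in range(15)]

# Crude text histogram: one row per distinct value, one '#' per occurrence.
# A single tall region in the middle that tapers off on both sides suggests
# a unimodal, roughly symmetric distribution.
for v in sorted(set(views)):
    print(f"{v:3d} | {'#' * views.count(v)}")
```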
Doing the Test
Now let's actually get into the mechanics of doing the hypothesis test.
These are the summary statistics that will be needed to perform a hypothesis test.
[attachment=100]
We will be using the top 3 rows out of the bottom 4 to compute the test statistic (this test uses a t-ratio).
The t-ratio is determined by this equation:
(x̄A - x̄N - (µA - µN))/SE(A-N).
x̄A - x̄N represents the observed difference in the means of the data.
µA - µN represents what we are hypothesizing to be the true difference in the actual means of the population.
We subtract µA - µN from x̄A - x̄N because the result shows the difference from what we'd expect. It's this difference that we want to test to see if it is significant.
So why do we divide by SE(A-N)? Well, different sets of data from different populations will have different means and will vary differently. However, the test being performed uses a single model (or equation, if you will) to determine if a difference is significant. The problem is, while the equation might be good for one scale, it would be terrible for another. Dividing by SE(A-N) "standardizes" the data onto one common scale, allowing the test to use its "one-size-fits-all" method.
We know that the observed difference in the data (x̄A - x̄N) is about 4.733. If we assume the null hypothesis to be true, then we are assuming that there is no difference between the actual population means (we are assuming that µA - µN = 0, which is why that alternative way of writing the null hypothesis is used). We can also calculate SE(A-N) to be about 1.04.
This makes the t-ratio (4.733 - 0)/~1.04 ≈ 4.54.
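As a sanity check, here's that computation in Python. The observed difference and the group sizes come from the post; the two group standard deviations are hypothetical stand-ins chosen to land near the post's SE of about 1.04, since the real ones are in the attached table.

```python
import math

n_a = n_n = 15              # 15 sequences per group (from the post)
mean_diff = 4.7333333       # observed x̄A - x̄N (from the post)
s_a, s_n = 3.1, 2.6         # hypothetical sample standard deviations

# Standard error of the difference in means for a 2-sample t-test:
# SE = sqrt(sA^2/nA + sN^2/nN)
se = math.sqrt(s_a**2 / n_a + s_n**2 / n_n)

# t-ratio: (observed difference - hypothesized difference) / SE,
# where the null hypothesis says the true difference is 0.
t = (mean_diff - 0) / se
print(round(se, 2), round(t, 2))
```

With these stand-in standard deviations the SE comes out to about 1.04 and the t-ratio to about 4.53, close to the post's 4.54; the small gap is only because the standard deviations here are guesses.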
We then use the model this test uses (a t-model) to turn the t-ratio of about 4.54 into a P-value. A P-value is the probability of getting the difference we saw (or a larger difference) in the data if there is no actual difference in the populations.
The area under the curve (where the arrow is pointing; to the right of the bar) shows the P-value. The area is very small. So small, in fact, that you can't even see it.
[attachment=103]
As it turns out, the P-value is about .015%, which is minuscule.


Conclusion
The P-value is very small. The probability of seeing a difference of 4.733 (or more) between the means of the data is about .015% if the null hypothesis (which is that there is no difference between the true means) is true. I find this probability too small, and reject the null hypothesis (a fancy way of saying: "this probability is so small that I don't think that the null hypothesis is actually correct") in favor of the alternative. These results lead me to believe that advertising your sequences on the OS forums does increase the amount of views that they get.

Well, if you're reading this text, then you have most likely just read through (or at least skimmed or looked at the highlighted sections) this giant wall of text that is me trying to explain a statistical process to you. Thank you for not tl;dr-ing this (I put a lot of effort into it). If you have any questions regarding this inference, please don't hesitate to ask! I'd be happy to clear up any confusions.
Reserved cuz only 5 attachments per post :/
I also like to organize my longer posts by sections, so yeah.
I think I'll leave it here for now. This'll hopefully be enough space. I'll probably get it done tomorrow, but I gtg for tonight. For now, I'll leave you to ponder this question: does advertising sequences actually increase their view count?
OBJECTION!

did you advertise it in chat? tl;dr
(06-02-2018, 08:29 PM)Palpatrump Wrote: OBJECTION!

did you advertise it in chat? tl;dr

No, in the forums.