A few months ago, The Baron argued in a post at Waters of Mormon that a weakness of the MPAA movie rating scheme is that it considers only the movie’s worst content category (of violence, profanity, and sex). For example, if a movie has enough profanity to get an R rating, the R says nothing about its levels of violence or sex. Such a movie could have any combination of levels of violence and sex, from none at all up to enough to warrant an R rating on their own even without the profanity.
The Baron pointed out that this practice of rating movies by only their worst type of content might set up an odd incentive:
this only encourages filmmakers to add more “R-rated” content to their movie, since obviously if they know they’re getting an R for violence already, why NOT add a lot of profanity and nudity as well? The rating is going to be the same, either way
This had never occurred to me, but I can see his argument that the rating system would create this incentive. His unstated assumption, though, is that movie makers want to put as much violence, sex, and profanity into their movies as they possibly can. I doubt that that’s actually the case. While I suspect they sometimes chafe at the restrictions that aiming for a particular rating places on them, I would be surprised if packing in lots of offensive material is often one of their major goals.
So which is true? Are movie makers anxious to put lots of offensive content into their movies, or not? What’s fun about this question is that there’s data I can use to try to answer it.
The Baron’s argument leads to the prediction that there aren’t any (or many) “soft” R rated movies. Makers of movies, once they know a movie is going to get an R rating, will typically put extra offensive material in. So there should be relatively few R rated movies that just barely exceed the R threshold, compared to the larger numbers of PG-13 rated movies that fall just below it and “harder” R rated movies that fall well beyond it.
My competing argument that the PG-13 to R threshold has little effect on movie makers leads to the prediction that there will be no dip in the number of movies at the “soft” end of R ratings.
To test these competing hypotheses, I’ll use data from Kids in Mind, a website that rates movies from 0 to 10 in three categories: sex/nudity, violence/gore, and profanity. As of November 21st, they had ratings for 2883 movies, and their ratings, along with MPAA ratings for the same movies, are the data I’ll be using.
Is the number of “soft” R rated movies smaller than you would expect?
Based on the Kids in Mind (hereafter, KiM) data, no, I don’t think so.
Here’s a figure that tries to answer the question. The scores along the bottom are sums of KiM scores for each movie (sex + violence + profanity) and the height of the bars represent numbers of movies. The colors of the parts of the bars represent movie ratings: G is green, PG is blue, PG-13 is yellow, and R is red. (3 NC-17 and 22 unrated movies are excluded.)
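For the curious, the counts behind a figure like this are easy to tally. Here’s a sketch in Python with a few made-up movies standing in for the real KiM data:

```python
from collections import Counter, defaultdict

# Hypothetical (KiM sum, MPAA rating) pairs -- illustrative only, not the real data.
movies = [(5, "PG-13"), (7, "R"), (13, "PG-13"), (13, "R"), (13, "R"), (2, "PG")]

# One Counter of ratings per KiM sum: the bar height at a sum is the
# total count there, and the colored segments are the per-rating counts.
by_sum = defaultdict(Counter)
for kim_sum, rating in movies:
    by_sum[kim_sum][rating] += 1

print(by_sum[13])  # Counter({'R': 2, 'PG-13': 1})
```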
Answering the question is complicated by the fact that KiM sum scores do not predict MPAA ratings perfectly. But the two are clearly strongly related, so I can at least give it a shot.
Notice as you look left to right in the figure that R rated movies start to appear almost as soon as PG-13 rated movies do. The first real concentration of PG-13 rated movies is at a KiM sum of 5; R rated movies appear only a little later, at 7. More generally, the KiM sums at which PG-13 rated movies are most common almost all have a fair number of R rated movies at the same sum. Or consider this: the most common KiM sum for a PG-13 rated movie is 13, but over a quarter of movies having that KiM sum are rated R.
If The Baron’s hypothesis were correct, this figure would show a much greater separation between PG-13 and R rated movies, with the bulk of the R rated movies not appearing until farther to the right, where the number of PG-13 movies is declining. Instead, the whole set of movies forms a pretty smooth distribution, with lots of movies toward the middle KiM sums, fewer toward the extremes, and no really obvious breaks.
Here’s a table that attempts to answer the same question. For each MPAA rating (other than NC-17), I’ve listed the number of movies receiving that rating as well as the minimum, median, and maximum KiM sums for those movies. In the last two columns are the percentages of movies having that MPAA rating whose KiM sums were less than or equal to the median or maximum KiM sum for the next “softer” MPAA rating. Sorry–I know that’s a mouthful. It might be clearer with an example. If you look at the PG line, it says 19 under the column for the median. This means that 19% of PG rated movies had KiM sums lower than or equal to the median (middle) for the G movies. Where it says 89 in the max column, this means that 89% of PG rated movies had KiM sums lower than or equal to the maximum KiM sum for a G rated movie.
|MPAA rating||Number of movies||KiM sum: Min||Median||Max||Pct <= softer median||Pct <= softer max|
Since higher numbers in these last two columns indicate more overlap, it looks like the G and PG rated movies overlapped a lot in their KiM sums. PG-13 and R rated movies also overlapped quite a bit (three quarters of R rated movies had a KiM sum no higher than the highest PG-13 rated movie). But the real break is between PG and PG-13. Only a little over half of PG-13 rated movies had a KiM sum less than or equal to the highest KiM sum for PG rated movies. This suggests that the rating categories are pretty different. So in spite of its name, the PG-13 rating appears to be a breakoff from the R rating and not from the PG rating.
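If you’d like to see how the “Pct <= softer median/max” columns are computed, here’s a sketch in Python. The KiM sums below are made up for illustration; only the calculation mirrors the table’s definition:

```python
from statistics import median

# Hypothetical KiM sums per MPAA rating -- illustrative only, not the real data.
kim_sums = {
    "PG-13": [5, 8, 11, 13, 13, 16],
    "R":     [7, 12, 14, 18, 22, 25],
}

def pct_at_or_below(harder, softer_cutoff):
    """Percent of 'harder' movies whose KiM sum is <= a cutoff from the softer rating."""
    return 100 * sum(s <= softer_cutoff for s in harder) / len(harder)

softer = kim_sums["PG-13"]
harder = kim_sums["R"]
print(pct_at_or_below(harder, median(softer)))  # overlap vs. the softer median
print(pct_at_or_below(harder, max(softer)))     # overlap vs. the softer max
```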
Now I’ve strayed a bit from the original question, but once I’ve got a data set like this, why not play with it a little and see what other interesting questions I can try to answer? I understand, though, if you don’t want to continue with me.
How are the levels of sex, violence, and profanity in a movie related?
Here are the correlations[1] between the KiM scores for sex, violence, and profanity. I’ve also thrown in MPAA ratings, which I simply entered as G=0, PG=1, PG-13=2, R=3, NC-17=4.
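In case you want to compute correlations like these yourself, here’s a small Python sketch of the Pearson correlation. The numbers at the bottom are toy values for checking the function, not KiM data:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy check: a perfectly linear relationship gives r near 1.
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # ~1.0
```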
Hmm. So profanity goes with sex and violence more reliably than sex and violence go together. What’s really interesting, I think, is that of the three KiM scores, profanity is by far the most strongly related to MPAA rating. This leads to another interesting question:
Which of sex, violence, and profanity best predicts MPAA rating?
To answer this, I used a regression, which estimates how well each of the KiM ratings predicts MPAA rating, while taking into account that the KiM ratings are overlapping (that is, they are correlated among themselves). Profanity was the clear winner. For each 1 point increase in profanity, the predicted MPAA rating increased 0.17 points. Each 1 point increase in sex only increased the predicted MPAA rating by 0.09 points. Each 1 point increase in violence also increased the predicted MPAA rating by only 0.09 points.
I know these numbers can seem a little detached from the original question. So let’s consider an example. Say there’s a movie that rates 1-1-1 on the KiM sex, violence, and profanity scales. The regression predicts that the MPAA rating would be 0.98, or a standard PG given the 0 to 4 scale I used. What if sex increased 5 points to make a 6-1-1 movie? The regression predicts that the MPAA rating would be 1.43, or halfway between a PG and a PG-13. But what if instead profanity were increased 5 points to make a 1-1-6 movie? Then the regression predicts that the MPAA rating would be 1.83, solidly in PG-13 territory. So changing profanity clearly has a larger effect, even if the difference isn’t dramatic.
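These worked examples amount to plugging scores into the fitted equation. Here’s a Python sketch: the slopes are the ones reported above, and the intercept (0.63) is my back-calculation from the stated 1-1-1 prediction, so treat it as approximate:

```python
def predicted_mpaa(sex, violence, profanity):
    """Linear-regression prediction of MPAA rating on a G=0 ... NC-17=4 scale.

    Slopes are the reported coefficients; the 0.63 intercept is inferred
    so that a 1-1-1 movie predicts 0.98, as stated in the text.
    """
    return 0.63 + 0.09 * sex + 0.09 * violence + 0.17 * profanity

print(round(predicted_mpaa(1, 1, 1), 2))  # 0.98 -- about PG
print(round(predicted_mpaa(6, 1, 1), 2))  # 1.43 -- between PG and PG-13
print(round(predicted_mpaa(1, 1, 6), 2))  # 1.83 -- solidly PG-13
```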
In using this regression, I assumed that transitions between ratings all worked the same. That is, I assumed that the three KiM ratings would predict the difference between G and PG the same way they predicted the difference between PG and PG-13. This assumption could be false. In fact, when I checked it using a different kind of regression model, the test of whether the KiM ratings predicted the same for all the adjacent rating category pairs suggested that this assumption was false. So I looked at the pairs separately.
Which of sex, violence, and profanity best predicts each difference between pairs of adjacent MPAA ratings (e.g., G vs. PG)?
To answer this, I used three logistic regressions, one for the G vs. PG difference, one for the PG vs. PG-13 difference, and one for the PG-13 vs. R difference. (There were so few NC-17s that I didn’t look at the R vs. NC-17 difference.) This table shows some results from these logistic regressions. The values in the table are odds ratios. This means they tell how much the predicted odds of a movie being in the “harder” category are multiplied by for each one point increase in that particular KiM rating.
|KiM category||Odds ratio: PG vs. G||PG-13 vs. PG||R vs. PG-13|
Again, I realize this might be a lot to wrap your brain around, so let’s try an example. Among G and PG movies, a movie having KiM ratings of 0-0-1 for sex, violence, and profanity has predicted odds of 1.19:1[5] of being rated PG. This translates into a 54% probability: 1.19/2.19 = .54. If sex were increased 1 point, the new odds would just be multiplied by 2.03, yielding 2.42:1, or a 71% probability. Alternatively, if profanity were increased by 1 point, the new odds would be 1.19 x 3.51 = 4.18:1, or an 81% probability.
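This arithmetic is easy to script. Here’s a Python sketch for the PG vs. G comparison, using the baseline odds and odds ratios given in footnote 5:

```python
def pg_probability(sex, violence, profanity):
    """Predicted probability (among G and PG movies) of being rated PG.

    The 0.34:1 baseline odds for a 0-0-0 movie and the odds ratios
    (2.030 sex, 1.070 violence, 3.511 profanity) come from the text's
    footnote on the PG vs. G logistic regression.
    """
    odds = 0.34 * 2.030**sex * 1.070**violence * 3.511**profanity
    return odds / (1 + odds)

print(round(pg_probability(0, 0, 1), 2))  # 0.54 -- the 0-0-1 movie
print(round(pg_probability(1, 0, 1), 2))  # 0.71 -- sex up 1 point
print(round(pg_probability(0, 0, 2), 2))  # 0.81 -- profanity up 1 point
```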
Consistent with the results of the regression above, profanity is the best predictor for each pair of ratings. Its strength varies, though, as it’s a stronger predictor for the “softer” ratings pairs and the weakest for the R vs. PG-13 comparison. Violence, on the other hand, is not even a statistically significant predictor of the PG vs. G difference. In other words, it is not likely to be a true predictor at all. (Note that because odds ratios are multiplicative, an odds ratio near 1 means the predicted odds change very little.) It does predict differences for the “harder” pair comparisons, though. Sex, like profanity, predicts better for the “softer” ratings pairs.
So why is violence such a poor predictor of the G vs. PG difference?
The table below has mean KiM scores for each of the three categories for each MPAA rating. It looks like the problem is that violence is pretty similar in G (mean = 2.44) and PG (mean = 2.88) rated movies. Compare this to the much larger increases in mean KiM scores for the other categories as well as for violence beyond PG ratings, with increasingly “hard” MPAA ratings.
|MPAA rating||KiM means: Sex||Violence||Profanity|
What’s the stereotype about American vs. European movies? Ours have violence and theirs have sex? This result is consistent with that stereotype: even G rated movies don’t really approach having zero violence. This leads me to one last question:
What’s typically the “hardest” content in movies? Is it really violence?
Yes, it is. Or at least the KiM ratings are consistent with this idea. I checked which of sex, violence, and profanity was highest for each of the 2883 movies. Violence was highest or tied for highest for 1508 of them (52%). The number for profanity was somewhat lower (1300, 45%) and for sex it was a lot lower (780, 27%, ties make the totals exceed 100%).
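For anyone curious how the “highest or tied for highest” tally works, here’s a sketch in Python with made-up scores. Ties count toward every tied category, which is why the percentages add to more than 100:

```python
# Hypothetical (sex, violence, profanity) KiM scores -- not the real data.
movies = [(2, 6, 4), (5, 5, 3), (1, 3, 7), (4, 4, 4)]

# For each movie, credit every category that matches its highest score.
counts = {"sex": 0, "violence": 0, "profanity": 0}
for sex, violence, profanity in movies:
    top = max(sex, violence, profanity)
    for name, score in (("sex", sex), ("violence", violence), ("profanity", profanity)):
        if score == top:
            counts[name] += 1

print(counts)  # {'sex': 2, 'violence': 3, 'profanity': 2}
```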
Okay, enough from me. What thoughts do you have about MPAA movie ratings and their relationship with types of movie content? Or please let me know if there are other questions you would like me to try to answer with this fun little data set. (Or point out errors in my analyses if you like.)
1. Correlations range from -1 to +1. Positive values indicate that as values of one variable go up, values of the other variable go up too. Larger absolute values indicate stronger relationships.
5. Here’s the calculation: The predicted odds for a 0-0-0 movie of being rated PG are 0.34:1. Odds ratios are multiplicative, so this baseline is multiplied by the odds ratio for each of the KiM scores, each raised to the power of the score (in other words, the multiplication is just repeated that many times). So 0.34:1 x 2.030^0 (sex) x 1.070^0 (violence) x 3.511^1 (profanity) = 1.19:1. (Recall that raising a number to the 0th power yields 1, so the odds ratios for sex and violence have no effect in this calculation.)
- 4 December 2008