Let’s say you want to buy a new car. Now, you aren’t a car expert, but you have a general idea of the features you want in your slick new whip, and you can find a market-determined price for the car you want online. Every day that your new car gets you from point A to point B without violently exploding, you’ll know that you made a good decision. This is what cars are like.
Drugs are not like cars. Drugs are complicated little molecules pressed into tablets that a doctor tells you to take one, two, or three times per day, maybe until you die. How much do drugs cost? They cost whatever your pharmacist says they cost. Drug prices are obscured both by patients’ lack of drug expertise and by the complex negotiations among insurers, manufacturers, pharmacy benefit managers (PBMs), and pharmacies. Because patients cannot easily find a price for a drug, it is fair to ask if they are paying too much.

Fear not! The government regulates drugs, pharmacies, AND insurance companies. The government recognizes that patients do not know a lot about drugs and steps in to protect them. In fact, one of President Trump’s campaign promises was to lower the out-of-pocket costs for drugs. To that end, the Department of Health and Human Services (HHS) released American Patients First (APF), Trump’s blueprint to lower drug prices. It’s essentially a series of hypothetical plans that could maybe lower the cost of prescriptions in the United States. The APF correctly points out that consumers asked to pay $50 instead of $10 are four times more likely to abandon their prescription at the pharmacy. We want patients to be able to afford their medications and therefore be healthier. This is an important point to remember: reducing out-of-pocket costs is only useful if it increases health. Let’s see how the APF will make Americans healthier.

First we need to understand the justification for this beautiful document. Why are drug prices high? Well, one stated reason is that the 1990s saw the release of several “blockbuster” drugs that dramatically increased pharmaceutical company revenues. However, many of these drugs lost patent protection in the mid-2000s. In order to maintain constant revenue streams, the APF posits, companies raised prices on other drugs. The Affordable Care Act (ACA) also put upward pressure on drug prices in a few ways. First, it increased the number of critical-need healthcare facilities that receive mandatory discounts on drugs (340B entities). It also placed taxes on branded prescription drug sales, implemented to shift patients and organizations away from brand-name (read: expensive) drugs when generics are available. To pay these taxes, however, drug prices had to go up. All of these justifications for high drug prices establish a pattern: if one person pays less, the costs shift somewhere else. Someone has to pay.

How does the APF plan propose to tackle high out-of-pocket costs? The strategies are presented as a four-point plan. First, it proposes that the US increase competition in pharmaceutical markets. Classic free market stuff right here. One part is an FDA regulatory change that prevents a company from blocking entry of generic competitors into the market. Seems like a straightforward good idea. The other noteworthy idea here is to change how a certain class of expensive injectable drugs, biologics, is billed. This would prevent “a race to the bottom” in biologic pricing, which would make the market less attractive for generic competition. Essentially, this rule could keep biologic prices high, to make the market profitable, so there is generic competition, to lower prices. It is difficult to predict whether this would work. The second objective is to improve government negotiation tools. This part is pretty fleshed out, with 9 different bullet points. However, 8 of the 9 points relate to Medicaid or Medicare, programs that primarily help old and/or poor people.
Right now, Medicare drug coverage cannot take price into consideration when deciding whether to cover a drug. If the largest insurer in the country (the government) can start negotiating on prices, the market could shift dramatically. However, someone has to pay, and this may shift costs to private plans. Another goal in this section is to work with the Commerce Department to address the unfair disparity between drug prices in America and other countries. It is unclear how this would be achieved.

The third objective is to create incentives for lower list prices. Drugs have many different prices based on who is paying for them. Companies may be incentivized to raise list prices to increase reimbursement rates, since they often receive only a portion of the list price for a drug. However, if the drug is not covered by a patient’s plan, they could be on the hook for the inflated list price. One of the most widely criticized parts of the APF plan is to include list prices in direct-to-consumer advertising. Since most people do not pay the list price, is it even helpful to include? Probably not.

The final objective is to bring down out-of-pocket costs. I thought this was the purpose of the whole document, so I was surprised that it is also one of the sub-sections. Both of the proposals here target Medicare Part D, so they may have limited benefits for non-Medicare patients. One proposal is to block “gag clauses” that prevent pharmacies from telling patients when they could save money by not using insurance. While this will indeed lower out-of-pocket costs for certain prescriptions, the point of insurance is to spread out the costs. The inevitable side effect will be price increases for other prescriptions.

The long final portion of the document is a topic-by-topic list of questions that need to be addressed. Who knew that healthcare was so complicated? There are some good ideas in here that need to be explored, like indication-based pricing or outcomes-based contracts. Austin Frakt has a good piece on these here. My favorite question in the section is: “How and by whom should value be determined??” Yes, the question in the APF includes the double question marks. This question really gets to the philosophical crux of the healthcare problem. It should be pretty simple to solve. Here is one more quote: “Should PBMs be obligated to act solely in the interest of the entity for whom they are managing pharmaceutical benefits?”
As of this writing, none of these policies have been implemented, but the President could theoretically instruct the FDA to begin implementing them whenever. There are still many unknown implications to these policies. Each one likely has unintended consequences, as all policies do. The two critical questions we need to ask of our policy makers going forward are: who ends up paying instead, and does the change actually make patients healthier?
So uh good luck to us.
First, go visit willrobotstakemyjob.com. Will you lose your job to robots? A lot of articles and think pieces recently have touted the artificial intelligence (AI) revolution as a major job killer. And it probably will be...in a few decades. One of the most commonly studied AI systems is the neural network. In this post I want to demonstrate that, although neural networks are powerful, they are still a long way away from replacing people.
Some brief background: All types of neural networks are, wait for it, composed of neurons. Similar to the neurons in our brains, these mathematical neurons are connected to each other. When we train the network, by showing it data and rating its performance, we teach it how to connect these neurons together to give us the output that we want. It's like training a dog. It does not understand the words that we are saying, but eventually it learns that if it rolls over, it gets a treat. This video goes into more depth if you are curious.[1]

Conventional neural networks take a fixed input, like a 128x128 pixel picture, and produce a fixed output, like a 1 if the picture is a dog and a 0 if it is not. A recurrent neural network (RNN) works sequentially to analyze different-sized inputs and produce varied outputs. For instance, RNNs can take a string of text and predict what the next letter should be, given what letters preceded it. What is important to know about them is that they work sequentially, and that gives them POWER.

I originally heard about these powerful RNNs from a Computerphile video where they trained a neural network to write YouTube comments (even YouTube trolls will be supplanted by AI). The video directed me to Andrej Karpathy’s “The Unreasonable Effectiveness of Recurrent Neural Networks”. Karpathy is the director of AI at Tesla and STILL describes RNNs as magical. That is how great they are. His article was so inspiring that I wanted to train my very own RNN. Luckily for me, Karpathy had already published an RNN character-level language model, char-rnn [2]. Essentially it takes a sequence of text and trains a computer program to predict what character comes next.

With most of the work setting up the RNN system done, the only decision left was what to train the model on. Karpathy's examples included Shakespeare, War and Peace, and Linux code. Obviously I wanted to try something unique, and because I'm a huge fucking nerd I chose to scrape the Star Trek: Deep Space 9 plot summaries and quotable quotes from the Star Trek wiki [3]. Ideally, the network would train on this corpus of text and generate interesting or funny plot mashups. However, after training the network on the DS9 plot summaries and quotes, I realized that there was not enough text to train the network well. The output was not very coherent. The only logical thing to do was to gather more Star Trek related content, namely, the text from The Next Generation and Voyager episode wiki pages. After gathering the new text, the training data set had a more respectable 1,310,922 words (still small by machine learning standards).

[Technical paragraph] The network itself was a Long Short-Term Memory (LSTM) network (a type of RNN). The network had 2 layers, each with 128 hidden neurons (these are all the default settings by the way). It took ~24 hours to train the RNN. Normally neural network scientists use specialized high-speed servers. I used my Surface Pro 3. My Surface was not happy about it.
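If you are curious what a setup like that looks like in code, here is a minimal sketch of a two-layer, 128-unit character-level LSTM in Python with Keras. To be clear, this is not the code I actually used (that was a TensorFlow implementation of char-rnn, linked in [2]); the file name is a placeholder and the whole thing is just to give you the flavor.

```python
# A minimal character-level LSTM -- an illustrative sketch, not the actual char-rnn code I used.
import numpy as np
from tensorflow.keras import layers, models

# Placeholder file name; truncated to a tiny subsample so the demo fits in memory.
text = open("star_trek_summaries.txt").read()[:20000]
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len = 50
# Build (50-character sequence, next character) training pairs, one-hot encoded.
X = np.zeros((len(text) - seq_len, seq_len, len(chars)), dtype=np.float32)
y = np.zeros((len(text) - seq_len, len(chars)), dtype=np.float32)
for i in range(len(text) - seq_len):
    for t, c in enumerate(text[i:i + seq_len]):
        X[i, t, char_to_idx[c]] = 1.0
    y[i, char_to_idx[text[i + seq_len]]] = 1.0

# Two LSTM layers with 128 units each, mirroring the default settings mentioned above.
model = models.Sequential([
    layers.Input(shape=(seq_len, len(chars))),
    layers.LSTM(128, return_sequences=True),
    layers.LSTM(128),
    layers.Dense(len(chars), activation="softmax"),  # probability of each possible next character
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=20)
```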
"Show us the results!" Fine. Here is some of the generated text:

"She says that they are on the station, but Seven asks what she put a protection that they do anything has thought they managen to the Ompjoran and Sisko reports to Janeway that he believes that the attack when a female day and agrees to a starship reason. But Sisko does not care about a planet, and Data are all as bad computer and the captain sounds version is in suspicions. But she sees an office in her advancement by several situation but the enemy realizes he had been redued and then the Borg has to kill him and they will be consoled"

Not exactly Infinite Jest, but almost all of those are real (Star Trek) words. Almost like a Star Trek mashup fever dream. Who are the Ompjoran? Why doesn't Sisko care about a planet? What is a starship reason? It all seems silly, but what is amazing about this output is that the RNN had to learn the English language completely from scratch. It learned commas, periods, capitalization, and that the Borg are murderous space aliens.

One variable that I can control is the "temperature" of the network output. This tells the RNN how much freedom it has in choosing the next character in a sequence. A high temperature allows for more variability in the results. A temperature close to zero always chooses the most likely next character. This leads to a boring infinite loop:

"the ship is a security officer and the ship is a security officer and the ship is a security officer and the ship is a security officer and the ship is a security officer"

Here is an example of some high-temperature shenanigans. Notice how, like a moody teen, it does whatever it wants:

"It is hoar blagk agable,. Captainck, yeve things he has O'Brien what she could soon be EMH 3 vitall I "Talarias)"
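Under the hood, "temperature" is just a rescaling of the model's output probabilities before sampling. Here is a small sketch of how that typically works for a character model like this; it is illustrative only, not the exact sampling code from the implementation I used.

```python
import numpy as np

def sample_next_char(probs, temperature=1.0):
    """Pick the index of the next character from the model's output probabilities.

    Low temperature -> almost always the most likely character (the boring loop).
    High temperature -> flatter distribution, more chaotic output.
    """
    probs = np.asarray(probs, dtype=np.float64)
    logits = np.log(probs + 1e-12) / temperature  # rescale in log space
    scaled = np.exp(logits)
    scaled /= scaled.sum()                        # renormalize into a probability distribution
    return np.random.choice(len(scaled), p=scaled)

# Toy example: three "characters" with model probabilities 0.7, 0.2, and 0.1.
print(sample_next_char([0.7, 0.2, 0.1], temperature=0.1))  # almost always picks index 0
print(sample_next_char([0.7, 0.2, 0.1], temperature=2.0))  # much more random
```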
If you want to read more RNN-generated output, I have a 15,000-character document here. At one point it says "I want to die", which is pretty ominous. Seriously, check it out. For future reference, Star Trek may actually be a bad training set. Many of the words in the show are made up, so the network can hardly be blamed for also making up words. Hopefully it is clear to all the Star Trek writers reading this that your jobs are safe from artificial intelligence. For the rest of you, your jobs are probably pretty safe too. For now.

William Riker [a human]: "You're a wise man, my friend."
Data [an android]: "Not yet, sir. But with your help, I am learning."

[1] If you are really curious about neural networks, this free online book is a good resource.
[2] I actually used a TensorFlow Python implementation of Karpathy’s char-rnn code found here.
[3] You can find my code and input files on GitHub here.

Bill James’s Pythagorean expectation is a simple equation that takes the points scored by a team and the points scored against that team over a season and predicts their win percentage. Originally developed for baseball, it was adapted in the early 2000s for football, the other football, basketball, and hockey. In the interest of science I applied the same equation to our intramural Ultimate Frisbee team “The Jeff Shaw Experience”:

\[ \text{Win}\% = \frac{(\text{Points For})^2}{(\text{Points For})^2 + (\text{Points Against})^2} \]

Our Ultimate Frisbee team’s expected win percentage based on this formula is 36.9%. Over a four-game “season” this roughly translates to 1.5 wins. This seems like a significant departure from our actual 0.500 record. Of course, it’s impossible to win 0.5 games, so the only possibilities are winning 1 or 2 games (or 0 or 3 or 4). Still, there is something that we can learn about our team from our over-performance: when we lose a game, we lose by a lot, but when we win, it is often close. As our team’s example makes obvious, a longer season would allow for better predictions. With enough games we could even set up an Elo system to predict the winners of individual games (like FiveThirtyEight does for seemingly every sport). This also assumes 2 is the proper Pythagorean exponent for Ultimate Frisbee and this league, but that is a topic that is WAYYY too big for this blog. Hopefully our first playoff game will give us a much-needed data point to further refine our expected wins. Hopefully our expected wins go up.

Remember Super Bowl LI you guys? It happened, at minimum, five days ago, and of course Tom Brady won what was actually one of the best Super Bowls in recent memory. Football, however, is only one half of the Super Bowl Sunday coin. The other half is made up of 60-second celebrations of capitalism: the Super Bowl commercials. Everyone has a list of favorites. Forbes has a list. Cracked has a video. But since it is no longer politically correct in this Great country to hand out participation trophies, someone needs to decide who actually won the Advertisement Game.

To tackle (AHAHA) this question I turned to the infinite online data repository, Google Trends, which tracks online search traffic. Using a list of commercials compiled during the game (AKA I got zero bathroom breaks) I downloaded the search volume in the United States for each company/product, relative to the first commercial I saw (Google Home). [Author’s note: Only commercials shown in Nebraska, before the 4th quarter when my stream was cut, are included]. Here’s an example of what that looked like: the search traffic for a product instantly increased when a commercial was shown. You can see exactly in which hour a commercial was shown based on the traffic spike. Using the traffic spike as ground zero, I added up search traffic 24 hours prior to and after the commercial to see if the ad significantly increased the public’s interest in the product. Below is a plot of each commercial, with the percent of search traffic after the commercial on the vertical axis and the highest peak search volume on the horizontal. If you look closely you will see that some of them are labeled. If a point is below the dotted line, the product had less search traffic after the commercial than before (not good).
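If you want to reproduce the before/after tally yourself, it only takes a few lines. Here is a rough Python sketch assuming an hourly Google Trends export with a timestamp column and one column per search term; the file name and column names are made up.

```python
import pandas as pd

# Hypothetical hourly Google Trends export: one row per hour,
# one column per search term, values are relative search volume (0-100).
trends = pd.read_csv("superbowl_trends.csv", parse_dates=["hour"], index_col="hour")

def share_after_spike(series):
    """Fraction of search traffic (24h window on each side) that came after the biggest spike."""
    spike = series.idxmax()  # hour of the largest spike -- "ground zero"
    before = series.loc[spike - pd.Timedelta(hours=24):spike].sum() - series.loc[spike]
    after = series.loc[spike:spike + pd.Timedelta(hours=24)].sum()
    return after / (before + after)

for term in trends.columns:
    print(f"{term}: {100 * share_after_spike(trends[term]):.0f}% of traffic came after the spike")
```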
Overall, 86% of products had more traffic after their Super Bowl ad than before it. But there are no participation trophies in the world of marketing, and the clear winner is 84 Lumber. Damn. They are really in a league of their own (another sports reference!). Almost no one was searching for them before the Super Bowl, but oh boy was everyone searching for them afterwards. They used the ole only-show-half-of-a-commercial trick where you need to see what happens next but can only do that by going to their website. Turns out it’s a construction supplies company.

Pizza Hut had a pretty large spike during their commercial, but it actually was not their largest search volume of the night. Turns out most people are searching for pizza BEFORE the Super Bowl. Stranger Things 2 also drew a lot of searches, for obvious reasons. We all love making small children face existential Lovecraftian horrors. Other people loved the tightly-clad white knight Mr. Clean and his sensual mopping moves. The Fate of the Furious commercial drew lots of searches, most likely from people trying to decipher WTF the plot is about. Finally there was the lovable Avocados from Mexico commercial. No one was searching for Avocados from Mexico before the Super Bowl, but now, like, a couple of people are searching for them. Win. So congratulations, 84 Lumber, on your victory in the Advertisement Game. I’m sure this will set a dangerous precedent for the half-ads in Super Bowl LII.

Absentee voting has already begun in Nebraska. And it turns out there are more names on the ballot than just the presidential candidates. If you live in Nebraska’s 2nd congressional district, then you also get to vote for your representative to the US House! Nate Silver’s FiveThirtyEight polls-only forecast has the district as a dead heat between presidential candidates Hillary Clinton and Donald Trump, which could translate to a contested race for the House. Below I have some SparkNotes™ from the congressional debate so all of us in NE02 can get informed together.

But first some formalities: Brad Ashford was elected to represent the NE02 in 2014, the first Democrat to hold the seat since 1995. Before that he served in the Nebraska Unicameral (District 20) from 1987-1994 and from 2006-2015. You can read more about Brad Ashford at Ballotpedia or on his campaign website. Don Bacon is a retired US Air Force brigadier general from Papillion. He is currently an assistant professor in leadership at Bellevue University. You can read more about Don Bacon at Ballotpedia or on his campaign website. You can watch the debate and read the transcript on C-SPAN here. Some questions below only have answers from one candidate; those are questions that they asked each other (how cute). The notes get longer as the debate progresses, as the candidates begin having more of an open dialogue.

Some final thoughts: Mike’l Severe and Craig Nigrelli did an excellent job of moderating this (amazingly) civilized debate. Don clearly had some talking points that he wanted to squeeze in, and they took him off topic on occasion. Brad was very polished at the beginning of the debate, but while he maintained his substance, he lost some of that polish later on. Brad also thanked Don for his service on multiple occasions, while Don attacked Brad as a career politician and cast himself as an outsider. Obviously this is not comprehensive and I would encourage you to look further into these candidates, but if you do not have time, I hope this helped to inform your decision. Or it did not and you are still going to vote along party lines. That’s cool too; just don’t forget to register to vote!
Online voter registration in NE ends at 5:00pm on October 21st. You can register here.

It’s possible to find play-by-play win probability graphs for every NFL game, but that does not tell me much about how the game itself was played. Additionally, I only sporadically have time to actually WATCH a game, so using play-by-play data, R, and Inkscape I threw together this visualization of every play in this past Sunday’s game between the Kansas City Chiefs and New Orleans Saints. Why isn’t this done more often?
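For the curious, the general recipe is simple. My version used R and Inkscape, but here is a rough Python sketch of the same idea; the file name and column names are placeholders, and the actual chart design was done by hand, so treat this as a starting point rather than a recreation.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical play-by-play export: one row per play, with the team on offense,
# the play number, and the yard line (column names are placeholders).
pbp = pd.read_csv("chiefs_saints_pbp.csv")

fig, ax = plt.subplots(figsize=(12, 4))
for team, plays in pbp.groupby("offense"):
    ax.plot(plays["play_number"], plays["yardline"], marker="o", label=team)

ax.set_xlabel("Play number")
ax.set_ylabel("Yard line (distance to the end zone)")
ax.legend()
fig.savefig("every_play.svg")  # SVG output can then be polished by hand in Inkscape
```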
Podcasts are a big thing right now. They are perfect for commutes, washing dishes, long walks on the beach, whatever. Podcasts are a huge part of my day now. Serial (season 1 at least) changed the podcasting landscape, and now they are everywhere. There are so many great choices, maybe too many. Are we in a podcast bubble? Who cares. I get as many podcasts as I want, all for free.
On a recent episode of Question of the Day (a podcast of course) the hosts were discussing the future of podcasting. One of the co-hosts, James Altucher, posited that “it is worthwhile to do a podcast or to do an oral history...and the equipment is there.” He goes on to talk about how he records podcasts on his iPhone just for fun and uploads them “wherever.” This topic reemerged in a later episode on how to be an interesting person. The key, they agreed, was to ask interesting questions. Well, I have an iPhone and I want to be an interesting person. So I looked into what it takes to make a podcast, and what I found was alarming: it is so simple. Contributing to the podcast glut, I made a podcast, and in an effort to make more interesting people, I wrote this guide.

First a disclaimer: this method actually does cost money. You need to have a cell phone and a computer, but since this is 2016 those hopefully are not insurmountable obstacles. I will be using my iPhone as an example, but the process should be generalizable to Android phones etc.

First you need to record the audio for your podcast. iPhones come with a built-in voice memo app. Boom. Recording software. You’re basically halfway there. So find someone you want to interview, or just write up a script, and record it right there on your phone. Next you need to transfer your voice memos to your computer, which can be done through iTunes or by sharing them from the app.

If you recorded it perfectly the first time and don’t want to add music or effects you can skip this paragraph. For most of you, though, you will want to filter out some of the noise or splice together various snippets of audio. Audacity is a free, open-source audio editing program that does all of those things. Mac folks could also use GarageBand. Unlike some open-source software (*cough* Gephi *cough*), Audacity is very stable and user-friendly. You may need to download some plugins to import/export certain file formats, but the program will forward you to the appropriate websites. Audacity has a great tutorial on mixing narration with background music, so start there. If you want to try other effects, their help wiki is...uh...helpful. I used the noise reduction and compressor effects first to level everything out, and then the envelope tool to alter narration and music volume levels. Obviously you should listen to your podcast all the way through before exporting. I chopped out extraneous “uhs” and “ums” as well as any loud breaths. Be careful, though, because editing out too much can make the interview sound unnatural.

After you export your file as an MP3 you need to get it from your computer to the World Wide Web. iTunes does not host the podcast, but rather provides a distribution platform for audio files hosted elsewhere. Audacity has some recommendations for where and how to upload your file in their podcasting tutorial, but I chose a different route and used SoundCloud. You can also choose to host your file for free on Google Drive or WordPress. SoundCloud’s useful creator guide walks through how to use their service and how to get your podcast to iTunes. (NB: Make sure the profile picture on your SoundCloud account/podcast files is at least 1400x1400 pixels, or iTunes will reject your podcast.) Once your podcast is uploaded to SoundCloud, go to the content tab on the settings page and copy the link for your RSS feed.
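If you want to sanity-check that feed before submitting it anywhere, a few lines of Python with the feedparser library will do it. This step is entirely optional and my own suggestion, not part of SoundCloud’s or iTunes’s process, and the feed URL below is just a placeholder.

```python
import feedparser  # pip install feedparser

# Paste your own SoundCloud RSS feed URL here (this one is a placeholder).
feed = feedparser.parse("https://feeds.soundcloud.com/users/soundcloud:users:XXXXXX/sounds.rss")

print("Podcast title:", feed.feed.title)
for episode in feed.entries:
    # Each entry should have a title and an audio enclosure (the MP3 that apps will download).
    audio = episode.enclosures[0]["href"] if episode.enclosures else "NO AUDIO FOUND"
    print("-", episode.title, "->", audio)
```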
The last step is submitting your podcast to iTunes, the preeminent podcast repository. iTunes has a great walkthrough of the process. You basically just click “Submit a Podcast” from the podcast page of the iTunes store in iTunes, log in with your Apple ID, and paste the link to your RSS feed from SoundCloud. Click “Verify” and your podcast is submitted. It may take a day or two for your podcast to appear in the store. Once your podcast is up, you and your friends can download it to your phones, subscribe to it, and do anything else you can do with a “real” podcast, because your podcast IS a real podcast.

How easy was that? I made my first episode in a day and people were downloading it within 24 hours. For my first podcast I chose to interview my dad about his life and cut the interview into four different “episodes.” I am still not sure if he counts as an interesting person, but it was fun for both of us and I got to practice asking questions. Maybe the best way to become an interesting person is just to tell others that you have a podcast. You can find the podcast I made, “Papa Cam”, here or search for it in the iTunes store.

The Department of Pharmacology at Creighton University School of Medicine is small, but mighty. There are only 10 professors or principal investigators (PIs) in the department, but this small size has its advantages. Or at least that is what we tell ourselves. A recent paper in Nature argued that bigger is not always better when it comes to labs, and we are putting that to the test. Ideally, with a smaller faculty, there would be more collaboration. Everyone knows what everyone else is doing, more or less, so they can more efficiently leverage the various expertise found throughout the department.

To measure how interconnected the pharmacology department was, I created a network analysis visualization based on who published with whom. Using NCBI’s FLink tool, I downloaded a list of the publications in the PubMed database for each PI in the pharmacology department at CU. A quick script in R formatted the authors and created a two-column “edge list” for each author, basically a list of every connection. This was imported into the free, open-source network analysis program Gephi, which crunched the numbers and produced a stunning map of the connections in the pharmacology department.

Gephi automatically detects similar clusters (seen as different colors), which are unsurprisingly centered on the various PIs in the department, since those are the publications I was looking at. Dr. Murray, the department chair, has the most connections, also known as the highest degree, at 292, followed by Dr. Abel. Drs. Dravid and Scofield are ranked 2nd and 3rd respectively for betweenness centrality, after Dr. Murray. They are the gatekeepers that connect Drs. Abel, Bockman, and Tu to Dr. Murray. Each point’s size is proportional to its eigenvector centrality, similar to Google’s PageRank metric of importance.
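My pipeline used R for the formatting and Gephi for the math and the picture, but as a rough illustration of the same idea, here is how you could build the co-authorship edge list and those centrality numbers in Python with networkx. The CSV file and its column names are made up for this sketch.

```python
import itertools
import pandas as pd
import networkx as nx

# Hypothetical export: one row per paper, with authors separated by semicolons.
papers = pd.read_csv("creighton_pharmacology_papers.csv")

G = nx.Graph()
for author_string in papers["authors"]:
    names = [a.strip() for a in author_string.split(";")]
    # Every pair of co-authors on a paper gets an edge; repeat collaborations add weight.
    for a, b in itertools.combinations(names, 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

def top5(scores):
    """Return the five highest-scoring authors."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5]

# The same metrics discussed above.
print(top5(dict(G.degree())))                  # most connections (degree)
print(top5(nx.betweenness_centrality(G)))      # the "gatekeepers"
print(top5(nx.eigenvector_centrality(G)))      # PageRank-style importance
```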
I was a bit surprised at how dispersed the department was. 60% of the PIs could be connected, and many have strong relationships. However, the rest are floating on their own islands. Dr. Oldenburg is relatively new, so this is not surprising. The Simeones (who are married) are closely connected. Also unsurprising.

This was a quick-and-dirty analysis, and a few of the finer points slipped through the cracks. Some of the names are common in PubMed (especially Tu), so I did my best to filter what was there and only look at publications affiliated with Creighton. Unfortunately this filters out publications from other institutions by the same author. Also, not everyone is attributed the same way on every manuscript. This is especially true for Drs. KA Simeone and Gelineau-Van Waes, who have published under different last names, but also because sometimes a middle name is given and sometimes it is omitted. I tried my best to standardize the spellings for each PI, but with over 700 nodes I could not double-check every author to ensure there were not duplicates elsewhere. If more than one PI shows up on a paper, that paper may show up under both searches. This should not increase the number of edges, but would affect the “strength” of those connections.

The connections are about what I had imagined. The brain people are on one side, everyone else is on the other. Expanding the search to include the papers from coauthors outside of the pharmacology department might reveal more interesting connections. Just for fun I went ahead and pulled the data for every paper on PubMed with a Creighton affiliation. I could not even find my department on the visualization without searching for it. It is massive. The breadth of Creighton’s interconnectedness forces me to marvel at how vast the community of scientists must truly be. So many people working to improve the body of knowledge of the human race. We are really just small bacteria in a very large petri dish.

Image source: NY Times
Food stamps are mysterious. They are kind of like cobras. I have never seen one up close, nor do I want to. Are they actually stamps? Like mail stamps? No idea. But just because cobras are not a part of my daily life does not mean that they should be ignored. Food Stamps were utilized by over 45 million Americans in 2016, totaling $75 billion (less than 2% of the federal budget). That is not insignificant. So let’s check our privilege and become informed voters by (briefly) diving into the world of Food Stamps.

First, the term “Food Stamps” is passé. The government renamed it the Supplemental Nutrition Assistance Program, or SNAP, in 2008, though states are free to call it whatever they want (a small victory for states’ rights). EBT is another term that occasionally appears in grocery store windows and that I surmised was loosely associated with food stamps. EBT, or Electronic Benefits Transfer, is essentially synonymous with SNAP. An EBT card has funds transferred to it at the beginning of every month, which can then be used for SNAP purchases. So the “stamps” are not literal stamps. Nor were they ever really what I would consider stamps, but rather funny-colored tiny bills (see above). A person with an EBT card loaded with SNAP $$$ can purchase just about anything at a grocery store with a nutrition label. It is probably easier to list what you cannot purchase with food stamps than what you can: alcohol, tobacco, vitamins and supplements, hot prepared foods, pet food, and other non-food household items.
Applicants have to meet certain income tests to be eligible for SNAP. They must have a net monthly income below the federal poverty level. Additionally, some states have asset requirements that limit the amount of savings or property a recipient can own. Citizens can be considered categorically eligible if they meet the requirements for other federal programs. Several deductions factor into the calculations for benefits, including excessive housing costs. If an applicant spends more than 50% of their income on rent, anything above 50% can be deducted from their income for SNAP calculations. Certain aged and disabled populations also have looser restrictions on SNAP benefits.

Applying for SNAP is not easy, and the application varies between states. Iowa, for example, has a 19-page form that looks way more complicated than a 1040 tax form. I did not even want to read it, much less attempt to fill it out. The rigor in the application process is meant to curtail fraud, but it also places a burden on the family receiving the benefits and increases the administrative costs for the case workers who have to review the forms.

Benefits are calculated assuming a household spends 30% of its net income on food. So the amount received is the difference between the maximum allowed federal benefit for that family size and 30% of the household’s net income. For a family of 4 the maximum benefit is $649, which is about $6 less than the projected cost of the Thrifty Food Plan (TFP) for a family with two kids aged 6-8 and 9-11. The deficit is more pronounced for a family of two adults. This emphasizes the “supplemental” part of SNAP’s name. Even purchasing scant rations based on the TFP does not guarantee an adequate diet. Exacerbating this problem is state-to-state variability in food prices. While the federal maximum benefits are fixed in the contiguous 48 states, food prices in Connecticut can be over 30% higher than the national average, or 11% lower in Texas.
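To make the benefit arithmetic concrete, here is a tiny sketch. The $649 maximum for a family of four is from the figures above; the example net income is made up.

```python
# SNAP benefit = maximum allotment for the household size minus 30% of net monthly income.
MAX_ALLOTMENT_FAMILY_OF_4 = 649  # dollars per month, from the figure cited above

def snap_benefit(net_monthly_income, max_allotment):
    """Monthly benefit, assuming the household puts 30% of net income toward food."""
    return max(0, round(max_allotment - 0.30 * net_monthly_income))

# Hypothetical family of four with $1,500 in net monthly income:
print(snap_benefit(1500, MAX_ALLOTMENT_FAMILY_OF_4))  # -> 199 dollars per month
```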
SNAP does have some economic upside. SNAP spending by the government has a multiplier effect: for every $1 spent on SNAP, US GDP increases by $1.79. SNAP also decreases hospitalization costs and improves school attendance for children. SNAP has its benefits and drawbacks, but for over 10% of Americans it is a necessity. If you want to learn more about SNAP or to try to live the SNAP life at home, check out the “Food Stamped” documentary website for details. To find more specific statistics for SNAP in your state, check out the interactive map at the Center on Budget and Policy Priorities. To find out more information about cobras, click here.

I don’t have cable, so I did not get the chance to watch the Grammys this year. I was, however, happy to hear that Taylor Swift won the Grammy for Album of the Year for 1989 (since I recently wrote a post about how great she is). When I was writing the aforementioned post I did notice that she was nominated, but I felt pretty confident the National Academy of Recording Arts and Sciences would give it to Kendrick Lamar’s To Pimp a Butterfly. This is Swift’s second Grammy for Album of the Year (she also won for Fearless, as we all know). Since the data have already been scraped from the Billboard Hot 100, I might as well get some mileage out of them.

For each week since November of 2014 (around when 1989 and To Pimp a Butterfly were released), I assigned any song by any of the five artists nominated for Album of the Year a point value from 1 to 100 based on its position in the Hot 100. Songs ranked number 1 were given 100 points, songs ranked 100 were given 1 point, et cetera. Then for each artist I added up the point values for each week, the results of which you can see below.

Notice someone missing from this visual? None of the songs from the Alabama Shakes’ album Sound & Color made it to the Hot 100. This is despite the fact that their album was on the Billboard 200 for album sales for 26 weeks, peaking at number 1. Chris Stapleton has a little purple blip around December, seven months after Traveller was released. Kendrick makes it on here and there, but the graph is clearly dominated by Taylor Swift and The Weeknd. The Weeknd has by far the highest peaks, but Taylor proves her popularity with the largest total area under the curve, 13,676 “points” vs. The Weeknd’s 11,156 “points”. TSwift also has the highest average per week, though not by much.
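The scoring itself is easy to reproduce. Here is a rough pandas sketch assuming a scraped chart CSV with week, artist, song, and rank columns; the file and column names are made up, and real Hot 100 artist credits (featured artists and the like) would need a bit more string matching than this.

```python
import pandas as pd

# Hypothetical Hot 100 scrape: one row per song per week, with its chart rank.
hot100 = pd.read_csv("hot100.csv", parse_dates=["week"])  # columns: week, artist, song, rank

nominees = ["Taylor Swift", "Kendrick Lamar", "The Weeknd", "Alabama Shakes", "Chris Stapleton"]

chart = hot100[hot100["artist"].isin(nominees)].copy()
chart["points"] = 101 - chart["rank"]  # rank 1 -> 100 points, rank 100 -> 1 point

weekly = chart.groupby(["week", "artist"])["points"].sum().unstack(fill_value=0)
print(weekly.sum())   # each artist's total "area under the curve"
print(weekly.mean())  # each artist's average points per week
```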
Fun fact: Swift’s “Bad Blood” was minimally successful until she added some bars by Kendrick Lamar...which went on to win the Grammy for Best Music Video. Does that say more about Taylor or Kendrick? We can debate whether song popularity should be the metric by which we measure the value of an album. Obviously a lot of people thought Sound & Color was a world-class album despite its absence from the Hot 100. In fact, the National Academy of Recording Arts and Sciences insists that Album of the Year is meant to “honor artistic achievement, technical proficiency and overall excellence in the recording industry, without regard to album sales or chart position.” However, of those nominated this year, they did pick the one with the best album sales and chart position.