Friday check-in . . .

. . . OK, I’m a day late from my promised posting time. I ran into some data problems that I had to work through – details are below.

First, last week’s prediction. The stats were pointing to around 4,033 cases by 10Apr2020. The actual count reported on the Johns Hopkins site was 3,962 – a difference of only 71 cases. We’ll have to wait until next week, though, to see if this was a phenomenal bit of luck or whether the simple model I’m using is matching reality.

The problem I ran into with the data started on Wednesday. The day-to-day variation in the reported cases was wildly erratic. The growth factor is calculated directly from the daily cases and I think I’ve made my point about the sensitivity of predictions based on the growth factor.

I didn’t want to do anything too radical to the data, otherwise it would appear that I was manipulating the numbers to make my own point. After a little research I decided to apply a three-day running average to the new-case numbers. This is a common technique used to smooth financial and scientific data. If you have the following set of data:

DAY      1     2      3      4      5      6      7
CASES    80    40     110    100    100    190    60

Day 2 is (80+40+110)/3, day 3 is (40+110+100)/3, etc. This gives you the following smoothed data:

DAY      1     2      3      4      5      6      7
CASES    80    76.6   83.3   103.3  130    116.6  60

Day 1 will always be 80 and day 7 will become a smoothed value if and when data for day 8 is collected.
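For anyone who wants to reproduce the smoothing, here’s a minimal Python sketch of the centered three-day average described above (the endpoint handling follows that description; this isn’t necessarily the exact script behind the graphs):

# Centered three-day running average; the first and last days keep their
# raw values until a neighbouring day's data exists.
def smooth3(cases):
    out = list(cases)
    for i in range(1, len(cases) - 1):
        out[i] = (cases[i - 1] + cases[i] + cases[i + 1]) / 3
    return out

print([round(c, 1) for c in smooth3([80, 40, 110, 100, 100, 190, 60])])
# [80, 76.7, 83.3, 103.3, 130.0, 116.7, 60]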

Applying this technique to the Johns Hopkins data gives the following graph:

It’s hard to do a direct comparison with the new-cases projection from last week because it wasn’t using smoothed data, BUT from last week’s graph one would expect around 450-500 new cases to be reported today. The smoothed data predicts just over 300, which is much closer to the reported 287. While this looks promising, I’ll keep checking as time progresses and will report the results next Friday.

Armed now with smoothed data, what do the growth factors look like? Here’s a plot of the growth factors over time from the original and smoothed data:

and here is this week’s plot of accumulated cases and a projection for Friday the 17th of April.

A significant note is that over the last three projections the growth factors have been 1.16, 1.10, and 1.04 – a definite trend down. This is good news as it indicates that social distancing appears to be working and every incremental decrease in the growth factor spreads the number of cases out over time so that our hospitals don’t become saturated.
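To see why even a small drop in the factor matters, compare four weeks of compounding at each of those three values (the starting count of 1,000 below is purely illustrative, not a North Carolina figure):

# Four weeks of compounding, cases ~ start * gf**days, at the three
# growth factors quoted above.  The starting count is illustrative only.
start = 1000
for gf in (1.16, 1.10, 1.04):
    print(gf, round(start * gf ** 28))
# 1.16 -> ~63,800    1.10 -> ~14,400    1.04 -> ~3,000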

Until next Friday . . . be safe, be healthy!!

Here we are a week later . . .

. . . let’s check to see if what was conjectured last Friday was close to correct. As a recap, here’s last Friday’s graph:

predicting 2,152 cases by the 34th day after 01Mar, i.e. 03Apr. Today’s graph with the weekly projection looks like:

The last data point is 2,041 cases. This differs from 2,152 by only 5.2%. But past performance doesn’t guarantee future results, so we’ll have to see if next Friday North Carolina has a reported 4,033 ± 208 cases.

You may notice that the blue line showing the number of new cases is missing from the second graph. The y-axis scale became so large as to make that line a mere blue squiggle along the 0 grid line. I have broken that data into a separate graph showing the daily number of reported cases with a non-linear least squares fit and a seven-day projection. Today it looks like:

For me this is the Scary Graph. Down around day 20, 50 cases were getting reported each day. Ten days later this jumps to around 150. Ten days after that (the end of next week), if the projection holds, we’ll be seeing 500 reported cases a day. This is not a good trend and indicates that we haven’t yet entered the middle part of the infectious spread, where the number of new cases per day is relatively constant.
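For the curious, here’s the general shape of such a fit in Python. The post doesn’t say which model was fitted, so an exponential is assumed here purely for illustration, and the daily-case numbers are made up to show the mechanics, not the actual North Carolina data:

# Non-linear least-squares fit of daily new cases to an assumed exponential
# model, followed by a seven-day projection.
import numpy as np
from scipy.optimize import curve_fit

def model(day, a, b):
    return a * np.exp(b * day)

rng = np.random.default_rng(0)
days = np.arange(20, 41)                       # hypothetical days 20-40
daily = model(days, 5.5, 0.11) * rng.uniform(0.8, 1.2, days.size)  # fake data

(a, b), _ = curve_fit(model, days, daily, p0=(5.0, 0.1))

future = np.arange(41, 48)                     # next seven days
print(np.round(model(future, a, b)).astype(int))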

One interesting feature that jumps out at me is the “blips” in the data. It’s not obvious below day 15 but there’s a hint of it. There’s a jump up, then a plunge the next day, followed by three days of smoothly increasing data.

Will we now get three smoothly increasing days close to the projected line as we have in the previous two cycles? If the blips persist, what’s causing them? [My bet is that some of the reporting entities are releasing their numbers in batches and some are releasing numbers daily.]

Stay tuned (click the follow link if you’re interested). If something striking happens I’ll post it but look for the next post to drop next Friday.

Please excuse my obsession

In the previous posts I made something of a big deal about the number called the growth factor. I’d like to explain why it’s such a big deal. Consider the following graph:

It looks like “Mr. Toad’s Wild Ride” and is generated from the data I’ve collected from the Johns Hopkins COVID-19 website. I hope to show here that VERY small changes in its value can lead to major changes in the outcomes. First, let’s look at the actual numbers:

6, 0.1666667, 7, 0.4285714, 2.333333, 1, 1.142857, 3, 1.083333, 1.653846, 1.255814, 1.555556, 0.4047619, 3.058824, 0.9807692, 1.107843, 1.168142, 1.787879, 0.2923729, 2.188406, 1.099338, 1.096386

The 6 and 7 look a lot like outliers so running R’s boxplot routine on the data gives the following plot:

Boxplot, using Tukey’s method, does indicate that 6 and 7 fall outside the fences (more than 1.5 times the interquartile range beyond the quartiles), so we’ll eliminate them. That leaves:

0.1666667, 0.4285714, 2.333333, 1, 1.142857, 3, 1.083333, 1.653846, 1.255814, 1.555556, 0.4047619, 3.058824, 0.9807692, 1.107843, 1.168142, 1.787879, 0.2923729, 2.188406, 1.099338, 1.096386

Additionally, growth factors of 2 or greater lead to extremely high infection counts (the entire world would be infected in under 33 days) that are not reflected in reality, so eliminate those data points as well:

0.1666667, 0.4285714, 1, 1.142857, 1.083333, 1.653846, 1.255814, 1.555556, 0.4047619, 0.9807692, 1.107843, 1.168142, 1.787879, 0.2923729, 1.099338, 1.096386

There are no outliers in the remaining data:

so, I’m going to use the last seven days’ growth factors to predict the number of infections over the next seven days. For comparison, let’s check the projections using the raw growth rates, the rates with the outliers and values greater than 2 removed, and finally the filtered seven-day average.

  • Raw growth rates: obviously out of line with observed reality. The early data was very noisy.
  • Outliers and values >2.0 dropped: several very LOW values from early in the data collection appear to pull the growth factor abnormally low.
  • Filtered seven-day running average: produces results more congruent with observed reality.
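For anyone following along at home, here’s roughly what that filtering looks like in Python. It’s a sketch, not the R code behind the plots (the quartile convention differs slightly from R’s boxplot), and the compounding at the end is just one simple way to turn the averaged factor into a projection:

# Tukey-fence filtering, removal of factors >= 2, and a seven-day average.
from statistics import quantiles, mean

growth = [6, 0.1666667, 7, 0.4285714, 2.333333, 1, 1.142857, 3, 1.083333,
          1.653846, 1.255814, 1.555556, 0.4047619, 3.058824, 0.9807692,
          1.107843, 1.168142, 1.787879, 0.2923729, 2.188406, 1.099338,
          1.096386]

q1, _, q3 = quantiles(growth, n=4)
iqr = q3 - q1
fenced = [g for g in growth if q1 - 1.5 * iqr <= g <= q3 + 1.5 * iqr]

# Drop the physically implausible factors of 2 or greater as well.
filtered = [g for g in fenced if g < 2]

# Average the last seven remaining growth factors ...
gf = mean(filtered[-7:])

# ... and compound it forward for a week from some current cumulative count
# (the 1,000 here is purely illustrative, not a North Carolina figure).
current = 1000
print(round(gf, 3), [round(current * gf ** d) for d in range(1, 8)])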

I will be using the running average in future predictions. Let’s see how things look on Friday . . . film at 11:00.

Dying for the Dow . . .

In my opinion, not sheltering is not an option.

If you read my previous post about COVID-19 you know a key number is the growth factor. What’s not obvious is HOW sensitive the outbreak is to this seemingly little number. So, with this in mind, I thought I would recast the math in terms to which some folks can relate.

If you have a savings account (not a given here in the U.S.) you know that you receive a nominal interest rate on your savings. Let’s say your bank offers a 16% return on your savings (yes, a complete fantasy – I’m earning 0.70%). You start a new account with one dollar as an opening deposit and let it sit for a year. At the end of the year you’ll have . . . yes, $1.16.

BUT, this is an annual rate whose accrued interest is spread over a year. The growth factor is a DAILY rate generated from the previous day’s measurements. So, if you open an account with one dollar and apply a 16% daily rate for one year you’ll have:

$336,640,200,000,000,000,000,000.00

OK, you argue, that’s for an entire year. We’re talking about a pandemic that lasts just a few months. Applying the new-case formula from the previous post with a growth factor of 1.16 for three months, you’ll get:

$632,730.00

Certainly not chump change. Of course, math is agnostic, so whether it’s dollars or infections the number is the number. Currently there are 649,904 COVID-19 cases worldwide. Given the virus has been very active since 01Jan2020 (about three months), it looks like our growth factor of 16% is pretty good.

As I type this there are currently 115,547 confirmed cases in the U.S. In 28 days at a growth factor of 1.16 there will be:

7,371,950

cases. NOW do you see why you should take this seriously? NOW do you see why it’s important to stay home? You’ll never be able to benefit from a rebounding economy if you’re dead.

(By the way, I didn’t just pull 1.16 from some dark hole. It’s the growth factor for North Carolina as of 27Mar2020 at 10:30.)
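If you want to check the arithmetic, all three figures above come from the same daily compounding, value = start * 1.16**days:

# Daily compounding at 16 percent: value = start * 1.16**days.
gf = 1.16
print(1 * gf ** 365)                 # ~3.37e+23 dollars after a year
print(round(1 * gf ** 90))           # ~632,730 dollars after about three months
print(round(115_547 * gf ** 28))     # ~7.37 million U.S. cases in 28 days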

Three musicians are sitting in a bar . . .

. . . in Austria. At 3:00AM. The topic turns to “Funny titles for music”. We aren’t talking about the usual titles like “If You Won’t Leave Me, I’ll Find Somebody Who Will” by Warren Zevon, “Shoop Shoop Diddy Wop Cumma Cumma Wang Dang” by Monte Video and the Casettes, or “Satan Gave Me a Taco” by Beck. We are throwing out titles that could put you into a major existential crisis. Unfortunately, only one stuck with me: “The Negative Space of Speed”, which I subtitled “A Musical Null Set”.

Returning from Austria I felt that this title/subtitle needed to actually exist as a piece of music. What I soon realized is there is no possible combination of pitches, rhythms, dynamics, and tempi that could adequately describe this piece. The title implies “not music” or the absence of music. From this realization I managed to score the work for a full wind symphony (minus the piano and harp).

The beauty of the work is its flexibility. It requires NO rehearsal, can be played with ANY combination of wind and percussion instruments, and the musician’s skill level is completely irrelevant. It can also be used to pad an otherwise sparse concert repertoire.

Let’s say that you’ve programmed your concert for six works. But one of the works will have to be cut. There are any number of reasons this could happen:

  • you discovered that you don’t have performance rights to the work
  • the guest soloist turns out to be a serial killer and is arrested the day before the performance
  • the Fire Marshal rules that you can’t play your pyrophone in the concert hall
  • or, the band just can’t play the music.

No problem. Pull the work and replace it with “The Negative Space of Speed”. Your program still has six works and it appears you’re playing a cool avant-garde 21st-century work. Which you are . . . at least conceptually.

Please note that this is not an homage to (or blatant ripoff of) 4’33”, John Cage’s work that explores the idea that segments of time can be filled with structured or ambient sound (noise). Both can be considered music. Additionally, Cage was interested in the behavior of the audience and how their actions contribute to the music.

“The Negative Space of Speed” plays with the idea of zero duration. Without duration there can be no music – structured or otherwise. BUT oddly enough, there can still be performance. The conductor turns the page to the score, cues the ensemble, and the audience eagerly anticipates the downbeat of this new piece. What they don’t realize is that they missed the piece entirely and the downbeat is for the next work on the list (probably “First Suite in E-flat for Military Band” by Gustav Holst). While this may not cause a near riot as did the premiere of Stravinsky’s “The Rite of Spring”, it hopefully will fuel a lively audience discussion of what constitutes music.

And yes, you too can add “The Negative Space of Speed” to your repertoire. The perpetrators of this work have decided to make it a free download!! Just click here.


The Mahler Hammer at Eastman

The Eastman Philharmonia under Neil Varon’s baton performed Mahler’s 6th Symphony on 14Nov2018. In need of a Mahler Hammer (on short notice), Neil contacted me about renting, buying, or having one made. Since the hammer detailed on this site was available from the Duke University Wind Symphony, I worked with the Eastman School staff to have it shipped to them in time for their last rehearsals and the performance.


Good striking form!


Mahler Hammers can also be used for “percussive maintenance”!


Just in case you thought . . .

. . . my previous post was useless, here’s something to consider.

You go to a nice restaurant and order a $40 glass of Domaine de Montille Corton Clos du Roi Grand Cru. The sommelier pours you a bit; you swirl, smell, sip, and nod your approval. The sommelier pours your wine and leaves you looking at the glass, wondering if you’ve been cheated.

The shape of the glass is determined by the function sin(x). Here’s the plot (x = 0.0 to 1.5 radians) from the R statistical system. It’s been rotated 90 degrees counterclockwise to make visualization easier and is marked at x = 0.75 (half the height) and x = 1.111871 (half-full by volume).


Using this as a template draw a cross-section of a nice wine glass and fill it to half the height (fig. 1) and half the volume (fig. 2) like so:


Being a beer kind of guy, if I’m going to spend $40 on a glass of wine, I want a full glass of wine. But, as you can see, what appears to be a fairly full glass of wine is still only half full by volume. To prove this we can use the program from the previous post to calculate the volumes bounded by a = 0.0, b = 1.111871 and by a = 1.111871, b = 1.5. Here are the results:

[Walters-iMac:~/desktop] wemrt% python halffull.py
lower_bound = 0.000000
upper_bound = 1.500000
height      = 1.500000
volume      = 2.245359
half-volume = 1.122680 <- This should be the volume of the
                          top half of the glass.

Tolerance : 0.000001
Iterations: 20
lower_bound = 0.000000
upper_bound = 1.111871
height      = 1.111871
volume      = 1.122679

Then using the upper_bound for the lower_bound and 1.5 for the upper bound:

[Walters-iMac:~/desktop] wemrt% python halffull.py
lower_bound = 1.111871
upper_bound = 1.500000
height      = 0.388129
volume      = 1.122676 <- Off by 4 X 10^-6 rounding error.
half-volume = 0.561338

Tolerance : 0.000001
Iterations: 19
lower_bound = 1.111871
upper_bound = 1.315996
height      = 0.204125
volume      = 0.561337
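This isn’t the author’s halffull.py, but here’s a minimal Python sketch that reproduces the same numbers. The glass is sin(x) revolved about the x-axis, so the volume up to height h is the solid-of-revolution integral of pi*sin(x)^2 from 0 to h, which works out to pi*(h/2 - sin(2h)/4); a simple bisection then finds the height that holds half the total volume:

from math import sin, pi

# Volume of the glass from the bottom up to height h:
# integral of pi*sin(x)^2 dx = pi*(h/2 - sin(2*h)/4).
def volume(h):
    return pi * (h / 2 - sin(2 * h) / 4)

total = volume(1.5)       # ~2.2454, matching the total above
target = total / 2        # ~1.1227

lo, hi = 0.0, 1.5
while hi - lo > 1e-6:     # same tolerance the output above reports
    mid = (lo + hi) / 2
    if volume(mid) < target:
        lo = mid
    else:
        hi = mid

print(round(total, 6), round(target, 6), round(mid, 6))   # mid comes out near 1.111871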

So, yes, you were being shorted by quite a lot of wine. Call the sommelier back, show him this post, and tell him that for $40 he can bloody well give you half a glass of wine.


NCSE/UNC Spectrum Concert

The North Carolina Saxophone Ensemble and the UNC Saxophone Studio will perform on 11Apr2014 in the Kenan Music building rehearsal hall. The program notes for the concert will be projected on a large screen rather than printed, saving paper and allowing people to review the program both before and after the performance. To see the program notes click here.

Here are short descriptions of each piece.

Four5 – The fifth in a series of pieces for four players by John Cage. Cage wrote the “Number Pieces” later in his career. Click here for more information.

Melodies for Saxophone – Thirteen melodies written by Philip Glass for Jean Genet’s play “Prisoner Of Love” adapted by Joanne Akalaitis for the New York Theater Workshop.

The Difficulties – Electronica by Mark Engebretson and poetry by Brian Lampkin. For this performance a jazz baritone saxophone improvisation triggers electronic sounds to complement the reading of Lampkin’s poem “The Difficulties”.

Far Away – Takatsugu Muramatsu is most noted for his work in film and television but “Far Away” was originally written for the Libera boys choir.

Last Tango in Bayreuth – Peter Schickele originally played this on piano as something of a party trick, eventually completing it as a quartet for four bassoons. It’s a tongue-in-cheek tribute to Richard Wagner based on the “Tristan” chord from Tristan und Isolde and a theme from the “Overture to Act III” of Lohengrin.

Shetland Sequence – An arrangement of Shetland jigs by the British saxophonist Jan Steele. The jigs included are “Jack broke da prison door”, “Donald Blue”, “Sleep sound ida morning'”, “Lassies trust in providence”, and “Bonnie Isle o’Whaljay”.

Ecstatic Fanfare – An arrangement of the brass fanfare from the first movement of Steven Bryant’s “Ecstatic Waters” for wind ensemble.

Smiles and Chuckles / Beautiful Ohio Blues – These two pieces date from the early 20th century and were written for the Columbia Saxophone Sextet and The Six Brown Brothers. The arranger, David Lovrien, transcribed the pieces from recordings made on wax cylinders making these truly authentic saxophone pieces.

Capriol Suite – A collection of six dances with a Renaissance flavor written in 1926 by Peter Warlock. Originally written as a piano duet, the work was later re-scored by Warlock for orchestra.

Festive Overture, Opus 96 – Dmitri Shostakovich wrote this work in three days for the 37th anniversary of the October Revolution in 1954. Stylistically it is based on Glinka’s Russlan and Ludmilla overture written in 1842.

Trilogy – A transcription of the opening vocal section of the larger work of the same name by Keith Emerson and Greg Lake with the tenor sax taking the vocal solo and the ensemble covering the piano parts.