Random Problems: solutions to some random problems (by theboat)

Why Do We Let Amazon Pay No Federal Taxes? (2019-06-29)
I often see people mention Amazon or Netflix or whoever paying no federal tax as if it's a terrible thing for society. Is it though?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-Z41olRmaGQ0/XRf8HT3x75I/AAAAAAAAFMQ/Rzt5NG0uRhUFrWKqw-UQPf5Ax5UShXIPACLcBGAs/s1600/1920px-Amazon_logo.svg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="482" data-original-width="1600" height="auto" src="https://1.bp.blogspot.com/-Z41olRmaGQ0/XRf8HT3x75I/AAAAAAAAFMQ/Rzt5NG0uRhUFrWKqw-UQPf5Ax5UShXIPACLcBGAs/s1600/1920px-Amazon_logo.svg.png" width="40%" /></a></div><br /><br /><a name='more'></a>There are a few ways this can happen. For example, a company can carry forward losses from previous years or spend heavily on investment in the business. There's one specific way it happens, though, that no one I've talked with seems aware of, and it's worth walking through here: employee stock.<br /><br />I'll focus on one specific example...<a href="https://www.investopedia.com/terms/r/restricted-stock-unit.asp">Restricted Stock Units (RSUs)</a>.<br /><br />Employee bonuses in tech are often paid as RSUs. These are shares in the company that are not available to the employee until after some period of time. 
When those shares become available to the employee (i.e., they 'vest'), they count as income: the employee pays tax on them, and the company's employee compensation expense (which is deductible) increases.<br /><br />Let's walk through a detailed, hypothetical example.<br /><ul><li>In January 2016, a company gives an employee 100 shares of stock valued at $100 per share.</li><li>In January 2018, that stock vests and is now worth $150 per share.</li><li>In April 2019, the employee pays his 2018 taxes, finds that he's in the 32% income bracket, and pays 0.32 * $150 * 100, or $4800 in taxes.</li><li>The company deducts $150 * 100, or $15,000 for 2018. At a 21% max tax rate, that's 0.21 * $15,000, or $3150 in lost taxes.</li><li>In total, $4800 of taxes are paid (the deduction cancels the tax the company would otherwise owe on that $15,000).</li></ul><div>Imagine instead that the company just paid the employee an additional $15,000 in cash in 2018.</div><div><ul><li>Employee pays 0.32 * $15,000, or $4800 in taxes.</li><li>Company deducts $15,000, so 0.21 * $15,000, or $3150 in lost taxes.</li><li>In total, $4800 of taxes are paid.</li></ul><div>The only differences are that with RSUs the amount depends on stock growth, and the income and deduction land in a year other than the one in which the RSUs were granted.</div><div><br /></div><div>Finally, imagine we changed the law so the company could not deduct this kind of payment, and so the company just keeps the money as taxable profit:</div></div><div><ul><li>The employee has no additional income, so he pays $0 more in tax: $4800 in lost taxes compared with the other two situations.</li><li>The company pays 0.21 * $150 * 100, or $3150 in taxes.</li><li>In total, $3150 of taxes are paid.</li></ul><div>Compare the totals across those three situations. The current rules produce $4800 - $3150, or $1650 more in tax revenue than if the company had been the one taxed. In general, the difference is (employee tax rate - corporate tax rate) * stock value. The current situation is better for tax revenue.<br /><br />We could tax both the employee and the company, but that makes no sense. 
Also, if we did, companies could just shift to paying salary directly instead of stock bonuses to avoid the double tax, and we'd be back where we started, except companies would have lost the ability to give employees stock.<br /><br />It is important to note that this effect gets larger when the company is doing well: the equation above shows the gain in tax revenue is directly proportional to the stock's value at vesting, and the company's deduction grows right along with it. It's worth summarizing...<b>companies are able to lower their taxes more with this method when their stock value goes up, and this results in more tax revenue for the US government.</b></div></div><div><br /></div><div>In summary...a major way that fast-growing tech companies end up with low or zero federal tax burdens is through vesting RSUs that have grown in value over time. If the employee's tax bracket is higher than the company's (generally true for the high-paying tech companies that do this), the overall federal tax revenue is higher than it would be if the company were responsible for the tax. This is literally a company lowering its taxes by paying its employees more, which is generally thought to be a good thing.</div><div><br /></div><div><a href="https://en.wikipedia.org/wiki/Double_Irish_arrangement">There are shitty ways that companies avoid taxes</a>. Paying their employees well is not one of them. 
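The whole comparison above boils down to a couple of multiplications. Here's a quick Python sketch of it, using this post's numbers (100 shares, $150 at vesting, a 32% employee bracket, a 21% corporate rate):

```python
shares = 100
vest_price = 150.0       # per-share value when the RSUs vest
employee_rate = 0.32     # employee's income tax bracket
corporate_rate = 0.21    # max corporate tax rate

compensation = shares * vest_price   # $15,000, whether as RSUs or cash

# Current rules: the employee is taxed, and the company's deduction
# cancels the tax it would otherwise owe on that money.
revenue_if_paid_out = employee_rate * compensation   # ~$4800

# Hypothetical rules: the company keeps the money and is taxed on it.
revenue_if_kept = corporate_rate * compensation      # ~$3150

# (employee tax rate - corporate tax rate) * stock value
extra_revenue = revenue_if_paid_out - revenue_if_kept  # ~$1650
```

As long as the employee's bracket exceeds the corporate rate, `extra_revenue` is positive, which is the whole point here.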
</div><div><br /></div><br />

SI Notation and Custom Tick Marks in Matplotlib (2019-06-01)
How do you get SI notation and other custom tick marks in matplotlib?<br /><a name='more'></a>If you've ever plotted something in matplotlib with large x and/or y values, you've probably seen something like the x-axis here:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-PTIvBB6PFmY/XPLaDPIdWAI/AAAAAAAAFCA/jtujxcZ9AKA7_AUFjtcbqwcpoylmIBs-ACLcBGAs/s1600/no%2Bformat.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="420" data-original-width="899" height="auto" src="https://1.bp.blogspot.com/-PTIvBB6PFmY/XPLaDPIdWAI/AAAAAAAAFCA/jtujxcZ9AKA7_AUFjtcbqwcpoylmIBs-ACLcBGAs/s1600/no%2Bformat.png" width="90%" /></a></div><br /><br />I personally really dislike that. The '1e7' off to the side is easy to miss, and in general it would be great to be able to format the tick marks. 
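The highlighted snippets below can be awkward to copy out of a blog post, so here's a plain, self-contained version of the powers-of-1000 formatter this post develops, lightly adjusted so it always returns a string (tick formatters are expected to return one):

```python
def powers_of_1000(x, pos):
    # x is the tick value, pos is the tick's index; return the label text.
    if x == 0:
        return '0'
    bins = [1e12, 1e9, 1e6, 1e3, 1, 1e-3, 1e-6, 1e-9]
    abbrevs = ['E12', 'E9', 'E6', 'E3', '', 'E-3', 'E-6', 'E-9']
    for b, abbrev in zip(bins, abbrevs):
        if abs(x) >= b:
            return '{:.2f}'.format(x / b) + abbrev
    return str(x)  # smaller than every bin: fall back to the raw value

# To hook it up to a plot (assuming matplotlib is imported as usual):
#   from matplotlib.ticker import FuncFormatter
#   plt.gca().xaxis.set_major_formatter(FuncFormatter(powers_of_1000))
```

Swap the abbreviation list for ['T', 'G', 'M', 'k', '', 'm', 'u', 'n'] and you get the SI version shown at the end of the post.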
There are some obvious ways to do it:<br /><ul><li>set your own ticks and labels manually; this requires you to handle figuring them out, padding appropriately, etc., and it's a hassle</li><li>let matplotlib set ticks and labels, and then update them; this can be very slow</li></ul><div>There's another option that is pretty handy. Setting formatters can do this for you. There's a default one for <a href="https://matplotlib.org/gallery/text_labels_and_annotations/engineering_formatter.html">engineering notation here</a>. <a href="https://matplotlib.org/gallery/ticks_and_spines/tick-formatters.html">FuncFormatter</a> lets you specify the format function for the tick labels explicitly for more control. Here's an example:</div><!-- HTML generated using hilite.me --><br /><div style="background: #272822; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0;"><span style="color: #66d9ef;">def</span> <span style="color: #a6e22e;">powers_of_1000</span><span style="color: #f8f8f2;">(x,</span> <span style="color: #f8f8f2;">pos):</span><br /> <span style="color: #66d9ef;">if</span> <span style="color: #f8f8f2;">x</span> <span style="color: #f92672;">==</span> <span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">:</span><br /> <span style="color: #66d9ef;">return</span> <span style="color: #f8f8f2;">x</span><br /> <span style="color: #f8f8f2;">bins</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[</span><span style="color: #ae81ff;">1000000000000.0</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">1000000000.0</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">1000000.0</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">1000.0</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">,</span> <span style="color: 
#ae81ff;">0.001</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">0.000001</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">0.000000001</span><span style="color: #f8f8f2;">]</span><br /> <span style="color: #f8f8f2;">abbrevs</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[</span><span style="color: #e6db74;">'E12'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'E9'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'E6'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'E3'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">''</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'E-3'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'E-6'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'E-9'</span><span style="color: #f8f8f2;">]</span><br /> <span style="color: #f8f8f2;">label</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">x</span><br /> <span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">i</span> <span style="color: #f92672;">in</span> <span style="color: #f8f8f2;">range(len(bins)):</span><br /> <span style="color: #66d9ef;">if</span> <span style="color: #f8f8f2;">abs(x)</span> <span style="color: #f92672;">>=</span> <span style="color: #f8f8f2;">bins[i]:</span><br /> <span style="color: #f8f8f2;">label</span> <span style="color: #f92672;">=</span> <span style="color: #e6db74;">'{1:.{0}f}'</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">format(</span><span style="color: #ae81ff;">2</span><span style="color: #f8f8f2;">,</span> <span style="color: #f8f8f2;">x</span><span style="color: #f92672;">/</span><span style="color: #f8f8f2;">bins[i])</span> <span style="color: #f92672;">+</span> <span style="color: #f8f8f2;">abbrevs[i]</span><br /> 
<span style="color: #66d9ef;">break</span><br /> <br /> <span style="color: #66d9ef;">return</span> <span style="color: #f8f8f2;">label</span><br /></pre></div><div><br /></div><div>This puts all tick labels in terms of powers of 1000 (E3, E6, etc.). What does that look like in the plot?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-_C3D_mjo59M/XPLaGiyz4hI/AAAAAAAAFCE/Qlv4n0jG-p4AIN7BsVpERinUq3EiHzL9gCLcBGAs/s1600/powers%2Bof%2B1000.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="398" data-original-width="892" height="auto" src="https://1.bp.blogspot.com/-_C3D_mjo59M/XPLaGiyz4hI/AAAAAAAAFCE/Qlv4n0jG-p4AIN7BsVpERinUq3EiHzL9gCLcBGAs/s1600/powers%2Bof%2B1000.png" width="90%" /></a></div><br /><br />Pretty nice. How do you actually call that? Here's an example:<br /><br /></div><!-- HTML generated using hilite.me --><br /><div style="background: #272822; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0;"><span style="color: #f8f8f2;">plt</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">figure(figsize</span><span style="color: #f92672;">=</span><span style="color: #f8f8f2;">[</span><span style="color: #ae81ff;">12</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">5</span><span style="color: #f8f8f2;">])</span><br /><span style="color: #f8f8f2;">plt</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">plot(x,</span> <span style="color: #f8f8f2;">y)</span><br /><span style="color: #f8f8f2;">plt</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">title(</span><span style="color: #e6db74;">'Powers of 1000'</span><span style="color: #f8f8f2;">)</span><br /><span style="color: #f8f8f2;">plt</span><span style="color: #f92672;">.</span><span style="color: 
#f8f8f2;">gca()</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">xaxis</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">set_major_formatter(FuncFormatter(powers_of_1000))</span><br /></pre></div><div><br />Simple. Note that you can use this for the yaxis also by copying the last line and changing 'xaxis' to 'yaxis'.<br /><br />How do you write these functions in general? x is the value of the tick (it can be x or y axis...x doesn't mean 'x value' here). pos is the index of the tick in the list (0 is first). All you have to do is return a string that is the text you want displayed at that tick.<br /><br /></div><div>One more example...what about SI notation? This is representing 2500 as 2.5k, 10000000 as 10M, and so on. That looks almost like the powers of 1000 one:<br /><br /></div><!-- HTML generated using hilite.me --><br /><div style="background: #272822; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0;"><span style="color: #66d9ef;">def</span> <span style="color: #a6e22e;">SI</span><span style="color: #f8f8f2;">(x,</span> <span style="color: #f8f8f2;">pos):</span><br /> <span style="color: #66d9ef;">if</span> <span style="color: #f8f8f2;">x</span> <span style="color: #f92672;">==</span> <span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">:</span><br /> <span style="color: #66d9ef;">return</span> <span style="color: #f8f8f2;">x</span><br /> <span style="color: #f8f8f2;">bins</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[</span><span style="color: #ae81ff;">1000000000000.0</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">1000000000.0</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">1000000.0</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">1000.0</span><span style="color: 
#f8f8f2;">,</span> <span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">0.001</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">0.000001</span><span style="color: #f8f8f2;">,</span> <span style="color: #ae81ff;">0.000000001</span><span style="color: #f8f8f2;">]</span><br /> <span style="color: #f8f8f2;">abbrevs</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[</span><span style="color: #e6db74;">'T'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'G'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'M'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'k'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">''</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'m'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'u'</span><span style="color: #f8f8f2;">,</span> <span style="color: #e6db74;">'n'</span><span style="color: #f8f8f2;">]</span><br /> <span style="color: #f8f8f2;">label</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">x</span><br /> <span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">i</span> <span style="color: #f92672;">in</span> <span style="color: #f8f8f2;">range(len(bins)):</span><br /> <span style="color: #66d9ef;">if</span> <span style="color: #f8f8f2;">abs(x)</span> <span style="color: #f92672;">>=</span> <span style="color: #f8f8f2;">bins[i]:</span><br /> <span style="color: #f8f8f2;">label</span> <span style="color: #f92672;">=</span> <span style="color: #e6db74;">'{1:.{0}f}'</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">format(</span><span style="color: #ae81ff;">2</span><span style="color: #f8f8f2;">,</span> <span style="color: #f8f8f2;">x</span><span style="color: #f92672;">/</span><span style="color: 
#f8f8f2;">bins[i])</span> <span style="color: #f92672;">+</span> <span style="color: #f8f8f2;">abbrevs[i]</span><br /> <span style="color: #66d9ef;">break</span><br /> <br /> <span style="color: #66d9ef;">return</span> <span style="color: #f8f8f2;">label</span><br /></pre></div><div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-fKMfr_2HYIs/XPLZ_0mgmdI/AAAAAAAAFB8/k1EW7i127hsFXCaeN2VW5Lm_RE9f9Ir7QCEwYBhgL/s1600/SI%2BNotation.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="398" data-original-width="897" height="auto" src="https://1.bp.blogspot.com/-fKMfr_2HYIs/XPLZ_0mgmdI/AAAAAAAAFB8/k1EW7i127hsFXCaeN2VW5Lm_RE9f9Ir7QCEwYBhgL/s1600/SI%2BNotation.png" width="90%" /></a></div><br /><br />Simple again. This is a really powerful feature that took me some time to find, so hopefully this makes it more obvious to anyone else looking for it. <a href="https://colab.research.google.com/drive/1H0HaDgYn7f5KRXxvc3GGMMy2X_8aTlmu">A full example can be found here</a>.<br /><br /><br /></div>

Understanding Confidence Intervals (2019-05-25)
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.0/jquery.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/plotly.js/1.47.3/plotly.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/jstat/1.7.1/jstat.min.js"></script> Confidence intervals are confusing. 
I tried putting together a visualization that explains a valid interpretation of them.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-DX-UM1E_M64/XOn0SNoRIqI/AAAAAAAAFBA/claf8pa5i2gPuKNKYDba6Sc3QcczeA5PQCLcBGAs/s1600/confidence%2Binterval%2Bas%2Bmeans%2Btest.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="562" data-original-width="1600" src="https://1.bp.blogspot.com/-DX-UM1E_M64/XOn0SNoRIqI/AAAAAAAAFBA/claf8pa5i2gPuKNKYDba6Sc3QcczeA5PQCLcBGAs/s1600/confidence%2Binterval%2Bas%2Bmeans%2Btest.gif" width="90%" /></a></div><a name='more'></a>In case you aren't familiar, <a href="https://en.wikipedia.org/wiki/Confidence_interval">here's the wikipedia entry on confidence intervals</a>.<br /><br /><h4>Demonstrated interpretation</h4><div><br /></div><div><i>If you repeat the procedure N times, the percentage of calculated 90% confidence intervals that contain the true population parameter will tend toward 90%.</i></div><div><i><br /></i></div><div>I attempted to visualize this. The animation below generates 100 random numbers from a normal distribution centered at 0 (0 is the true population parameter here). It then calculates the confidence interval for the mean. It does this hundreds of times and tracks the percentage of confidence intervals that contained 0. For an 80% confidence interval, that should be ~80%. 
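You don't need a browser to run this experiment; here's a stdlib-only Python sketch of the same procedure (the function and variable names are my own, and z = 1.645 matches the demo's 90% entry):

```python
import random
import statistics

def ci_coverage(z, n=100, trials=2000, seed=0):
    """Fraction of z-based confidence intervals that contain the true mean (0)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(0, 1) for _ in range(n)]
        mean = statistics.mean(sample)
        # half-width of the interval: z * (sample stdev) / sqrt(n)
        half_width = z * statistics.stdev(sample) / n ** 0.5
        if mean - half_width <= 0 <= mean + half_width:
            hits += 1
    return hits / trials

# z = 1.645 corresponds to a 90% interval; this should land near 0.90
print(ci_coverage(1.645))
```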
For an 85% one, that should be ~85%, and so on.<br /><br />You can select the confidence interval to change it.</div><div><br /></div>Confidence Interval <select id="confidence" onchange="changed()"> <option value="80">80%</option> <option value="85">85%</option> <option selected="" value="90">90%</option> <option value="95">95%</option> <option value="99">99%</option></select><br /><div style="display: grid; grid-template-columns: 1fr 1fr;"><div id="sources" style="grid-column: 1/3; height: 275px;"></div><div id="low" style="height: 200px;"></div><div id="high" style="height: 200px;"></div></div><div id="summary" style="font-family: 'verdana', 'arial', sans-serif; font-size: 18px; text-align: center;"></div><div><br /></div><h4>Bad interpretations that this demo clears up</h4><div><ol><li>It is clear from the visualization that the X% confidence interval does not contain X% of the points.</li><li>It is clear from the visualization that the X% confidence interval is not the entire range of plausible values. 
The fact that some of the calculated confidence intervals do not contain 0 shows this pretty conclusively.</li></ol></div><div><br /></div><script>var lows = []; var highs = []; var i = 0; var misses = 0; var n = 100; var ciDict = { 80: 1.282, 85: 1.440, 90: 1.645, 95: 1.960, 99: 2.576 }; var target = 90; function randn_bm() { var u = 0, v = 0; while(u === 0) u = Math.random(); //Converting [0,1) to (0,1) while(v === 0) v = Math.random(); let num = Math.sqrt( -2.0 * Math.log( u ) ) * Math.cos( 2.0 * Math.PI * v ); num = num/20; // Translate to 0 -> 1 if (num > 5 || num < -5) return randn_bm(); // resample between 0 and 1 return num; } var interval = setInterval(animate, 200); function changed() { target = $('#confidence').val(); clearInterval(interval); lows = []; highs = []; i = 0; misses = 0; n = 100; interval = setInterval(animate, 200); } function animate() { let vals = []; let sum = 0; for (let j = 0; j < n; j++) { let val = randn_bm() sum += val; vals.push(val); } //calculate CI let avg = jStat.mean(vals); let CI = ciDict[target]*jStat.stdev(vals)/Math.sqrt(n); let lowCI = avg - CI; let highCI = avg + CI; if ((lowCI >= 0) || (highCI <= 0)) { misses += 1; } lows.push(lowCI); highs.push(highCI); let normal = [{ x: vals, type: 'histogram', xbins: { end: 5, size: .025, start: -5 } }]; let shapes = [ { type: 'line', x0: lowCI, y0: 0, x1: lowCI, y1: 250, line: { color: 'rgb(0,0,0)', width: 3 } }, { type: 'line', x0: highCI, y0: 0, x1: highCI, y1: 250, line: { color: 'rgb(0,0,0)', width: 3 } } ] let low = [{ x: JSON.parse(JSON.stringify(lows)), type: 'histogram', xbins: { end: .05, size: 0.001, start: -.05 } }]; let high = [{ x: JSON.parse(JSON.stringify(highs)), type: 'histogram', xbins: { end: .05, size: 0.001, start: -.05 } }]; Plotly.react('sources', normal, { title: 'Random Values from Normal Distribution Centered at 0', xaxis: { range: [-0.25, 0.25] }, yaxis: { range: [0, 30], title: 'count'}, margin: { t: 30, b: 30 }, shapes: shapes }, { responsive: true }); 
Plotly.react('low', low, { title: 'Lower Bounds of ' + target + '% Confidence Intervals', xaxis: { range: [-.05, .05] }, yaxis: { title: 'count' }, margin: { t: 30, b: 30 } }, { responsive: true }); Plotly.react('high', high, { title: 'Upper Bounds of ' + target + '% Confidence Intervals', xaxis: { range: [-.05, .05] }, yaxis: { title: 'count' }, margin: { t: 30, b: 30 } }, { responsive: true }); i++; $('#summary').html((100*(i - misses) / i).toFixed(1) + '% of trials so far contained the source mean (0) within their ' + target + '% confidence interval'); if (i === 300) { clearInterval(interval); } } </script> 

How Do You Know If You Have a Biased Coin? (2019-05-23)
Fun probability problems involving biased coins...<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-QXUfd2an15E/XOddy0mjsQI/AAAAAAAAFAk/GYcuc-m0xDUVMsaBpf4rmi3BufTWPZkpwCLcBGAs/s1600/United_States_Quarter.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="826" data-original-width="1600" height="auto" src="https://1.bp.blogspot.com/-QXUfd2an15E/XOddy0mjsQI/AAAAAAAAFAk/GYcuc-m0xDUVMsaBpf4rmi3BufTWPZkpwCLcBGAs/s1600/United_States_Quarter.jpg" width="40%" /></a></div><div><a name='more'></a><h4>Problem</h4></div><div>Imagine you have a bag of 10 coins. 9 of them are fair, and 1 is biased. The biased coin has a 75% chance of landing on heads. If you draw a coin from the bag, flip it 5 times, and get 5 heads in a row, how confident should you be that you have the biased coin?</div><div><br /></div><h4>Solution</h4><div>Imagine you had all 10 coins and flipped each of them 5 times. You repeat this thousands of times. You would get a bunch of outcomes: HHTHT, TTHTT, and so on, where H = heads and T = tails. 
Some would also be HHHHH.<br /><br />A fair coin has a 50% chance of landing on heads or tails. A fair coin flipped n times has a (0.5^n) chance of getting all heads. Likewise, our biased coin flipped n times has a (0.75^n) chance of getting all heads.<br /><br />To find the probability of anything other than all heads, simply note that 'all heads + anything else = 1'. For the fair coin then, there's a 1 - (0.5^n) chance of getting anything other than all heads and that chance is 1 - (0.75^n) for the biased coin.</div><div><br /></div><div>Imagine you flip all 10 coins 1000 times. You'll have 4 types of outcomes described by the following table:</div><div><br /></div><div><table style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><th></th><th>fair coins</th><th>biased coins</th></tr><tr><th>not HHHHH</th><td>1000*9*(1 - 0.5^5)</td><td>1000*(1 - 0.75^5)</td></tr><tr><th>HHHHH</th><td>1000*9*(0.5^5)</td><td>1000*(0.75^5)</td></tr></tbody></table><br /></div><div>Plugging that into a calculator, it's:</div><table style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><th></th><th>fair coins</th><th>biased coins</th></tr><tr><th>not HHHHH</th><td>8718.75</td><td>762.70</td></tr><tr><th>HHHHH</th><td>281.25</td><td>237.30</td></tr></tbody></table><br /><div></div><div>Now...you take a random trial and find that it was all heads (HHHHH). Was that from a biased coin or a fair coin? There were 281.25 fair coins and 237.30 biased coins that yielded an HHHHH result. 
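The expected counts in that table (and the fraction coming next) take only a few lines to reproduce:

```python
trials = 1000
fair_all_heads = trials * 9 * 0.5 ** 5      # expected fair-coin HHHHH count
biased_all_heads = trials * 1 * 0.75 ** 5   # expected biased-coin HHHHH count
p_biased = biased_all_heads / (fair_all_heads + biased_all_heads)

print(fair_all_heads)               # 281.25
print(round(biased_all_heads, 2))   # 237.3
print(round(p_biased, 2))           # 0.46
```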
(237.30) / (281.25 + 237.30), or ~46% of the HHHHH results were from biased coins.</div><div><br />Knowing that outcome across a huge number of trials, you know that in a single trial, getting HHHHH from a random coin means your confidence should be ~46% that it's the biased coin.</div><div><br /></div><div>Walking back through where those numbers came from, the probability of the biased coin given n flips when you have m fair coins for every biased coin is:<br /><br /><div style="text-align: center;"><b>probability of biased coin = (0.75^n)/(0.75^n + m*(0.5^n))</b></div></div><div><br /></div><div><br /><h4>Related problems</h4>How many heads in a row would you need to have 99% confidence? You know it has to be more than 5 since 5 only gave 46% confidence. Trying 10 yields 86% confidence. Trying 15 yields 98%. 17 ends up being the first n that gives a result greater than 99%.<br /><br />Even with the biased coin landing on heads 75% of the time, you still need 17 heads in a row to be 99% sure you have the biased coin.</div><div><br /></div><div>What if there are only two coins in the bag...one fair and one biased? Doing the same analysis, you end up needing 12 heads in a row to be 99% confident that you have the biased coin.</div><div><br />Going back to the first problem...imagine you grab a coin from the bag, flip it 5 times, and get HHHHH. Then you draw another coin from the bag. What are the odds that the second coin lands on heads when you flip it?<br /><br />There are two possibilities:<br /><ol><li>first coin was biased</li><li>first coin was fair</li></ol><div>If the first coin was biased, the next coin has to be fair since there was only one biased coin, so the probability of heads is 0.5. If the first coin was fair, there's an 8/9 chance of getting a fair coin next and a 1/9 chance of getting the biased coin, since there are 9 coins left and 1 is biased. 
In that case, you have a (8/9)*(0.5) + (1/9)*(0.75) chance of getting heads.<br /><br />From the initial problem, you know there's a ~46% chance that the first coin was biased since you got HHHHH from it. That means there's a 46% chance the first coin was biased and a 54% chance it was fair. Combining all of that:</div><div><br /></div><div style="text-align: center;"><b>chance of heads on second coin = 0.46*0.5 + 0.54*((8/9)*(0.5) + (1/9)*(0.75))</b></div><div style="text-align: center;"><br /></div><div style="text-align: left;">Plugging that into a calculator, you get a ~51.5% chance of heads on the second coin's flip.</div><div style="text-align: left;"><br /></div><div style="text-align: left;"><br /></div></div><div><br /></div>

Plotly Hover Works but the Point Is Not Rendered (2019-05-17)
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/plotly.js/1.47.4/plotly.min.js"></script> Sometimes when using plotly, you'll plot something, it won't show up, and yet hovering over the point that didn't show up will still work. What's going on?<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-uRSqZe5tjEs/XN-Wpnh3fYI/AAAAAAAAE_I/QZsteiynPiIR6BTuy-HXMQoXesfBSujwgCLcBGAs/s1600/example.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="386" data-original-width="1600" height="auto" src="https://4.bp.blogspot.com/-uRSqZe5tjEs/XN-Wpnh3fYI/AAAAAAAAE_I/QZsteiynPiIR6BTuy-HXMQoXesfBSujwgCLcBGAs/s1600/example.png" width="90%" /></a></div><br /><a name='more'></a><h4>Situation 1</h4><div>You've hit a bug. There are many bugs that can cause this, particularly with 'scattergl'. 
Examples are <a href="https://github.com/plotly/plotly.js/issues/3751">here</a> and <a href="https://github.com/plotly/plotly.js/issues/2999">here</a>.<br /><br /></div><h4>Situation 2</h4><div>There is also a non-buggy way for this to happen. If you are in 'lines' mode and you plot a point that connects to no other points, you will get this behavior. A simple example is a plot of a single point. A less trivial example is something like the following:<br /><ul><li>x: [1, NaN, 2, NaN, 3]</li><li>y: [1, NaN, 2, NaN, 3]</li></ul><div><a href="http://www.somesolvedproblems.com/2019/05/how-to-make-plotly-faster-with-many.html">As I've covered before</a>, lines are not interpolated to or from NaNs in 'lines' mode, so the points above will not be rendered. Hover will still work, however. There is a workaround if you can stand the performance drop: simply use 'lines+markers'. The performance impact will vary based on your data set, but I've generally seen it be about 50% slower.</div></div><div><br /></div><div>I have embedded a simple example below, and the <a href="https://codepen.io/rhamner/pen/WBGPoN">code is here</a>. These two plots are identical except that one uses 'lines' and one uses 'lines+markers'. 
You can easily see the 'hidden points with working hover' behavior if you hover over (1,1) or (2,2) in the 'lines' plot:</div><div><br /></div><div style="display: grid; grid-template-columns: 1fr 1fr;"><div id="plot" style="height: 400px;"></div><div id="plot2" style="height: 400px;"></div></div><div><br /></div><div><br /></div><div><br /></div><script>let x = [1, NaN, 2, NaN, 3, 4, 5, NaN, 6, NaN, 7]; let y = [1, NaN, 2, NaN, 3, 4, 5, NaN, 6, NaN, 7]; let trace = { x: x, y: y, type: 'scattergl', mode: 'lines+markers', line: { color: 'red', width: 5 }, marker: { color: 'red', size: 5, symbol: 'square' } }; Plotly.react('plot', [trace], { title: 'lines+markers' }); trace.mode = 'lines'; Plotly.react('plot2', [trace], { title: 'lines' }); </script>

How to Make Plotly Faster with Many Traces? (2019-05-10)
Plotly slows down with a lot of traces. I found a workaround to speed it up in certain situations.<br /><a name='more'></a><h4>Basic idea</h4><div>If you need to split traces for whatever reason (e.g., you don't want the interpolating line to draw from the right back to the left), a faster approach than creating a new trace for each segment is to insert a NaN value to split the trace. 
Plotly works much faster this way.<br /><br /></div><h4>Specific use case I had</h4><div>The problem I was working on was visualizing a massive number of frequency sweeps. Basically, you measure a signal at a bunch of different frequencies from low to high, change your configuration slightly (e.g., turn on a preamp), then measure again, and repeat for a lot of different configurations. It looks something like this:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-2aItGYXRNv4/XNZP2h7M2SI/AAAAAAAAE-I/4iWVp4rqxIUIh6pCGqf6yfDxoAxIpsJqACLcBGAs/s1600/good%2Bexample.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="373" data-original-width="1600" height="auto" src="https://1.bp.blogspot.com/-2aItGYXRNv4/XNZP2h7M2SI/AAAAAAAAE-I/4iWVp4rqxIUIh6pCGqf6yfDxoAxIpsJqACLcBGAs/s1600/good%2Bexample.png" width="80%" /></a></div><div><br /></div><div><br /></div><div>As you can see, you ideally want each trace to be separated. If you made it all one trace, the end point of one trace and the starting point of the next trace would be connected by a line like below:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-BXHCPfDf9Cw/XNZP8Gv1OII/AAAAAAAAE-M/Gw8I6lJIE5YJyZD-Ua31iGmDE_aZ5cS6ACLcBGAs/s1600/bad%2Bexample.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="353" data-original-width="1600" height="auto" src="https://3.bp.blogspot.com/-BXHCPfDf9Cw/XNZP8Gv1OII/AAAAAAAAE-M/Gw8I6lJIE5YJyZD-Ua31iGmDE_aZ5cS6ACLcBGAs/s1600/bad%2Bexample.png" width="80%" /></a></div><div><br /></div><div><br /></div><div>That's ugly and hard to read. A way around that is to add a NaN in between the point at the end of one line on the right and the beginning of the next line on the left. 
Here is a simple example code snippet:<br /><!-- HTML generated using hilite.me --><br /><div style="background: #272822; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0;"><span style="color: #66d9ef;">let</span> <span style="color: #a6e22e;">x</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[];</span><br /><span style="color: #66d9ef;">let</span> <span style="color: #a6e22e;">y</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[];</span><br /><br /><span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">(</span><span style="color: #66d9ef;">let</span> <span style="color: #a6e22e;">i</span> <span style="color: #f92672;">=</span> <span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">;</span> <span style="color: #a6e22e;">i</span> <span style="color: #f92672;"><</span> <span style="color: #ae81ff;">1000</span><span style="color: #f8f8f2;">;</span> <span style="color: #a6e22e;">i</span><span style="color: #f92672;">++</span><span style="color: #f8f8f2;">)</span> <span style="color: #f8f8f2;">{</span><br /> <span style="color: #a6e22e;">x</span><span style="color: #f8f8f2;">.</span><span style="color: #a6e22e;">push</span><span style="color: #f8f8f2;">(</span><span style="color: #a6e22e;">i</span><span style="color: #f92672;">%</span><span style="color: #ae81ff;">100</span><span style="color: #f8f8f2;">);</span><br /> <span style="color: #a6e22e;">y</span><span style="color: #f8f8f2;">.</span><span style="color: #a6e22e;">push</span><span style="color: #f8f8f2;">(</span><span style="color: #ae81ff;">5</span><span style="color: #f92672;">*</span><span style="color: #f8f8f2;">Math.</span><span style="color: #a6e22e;">random</span><span style="color: #f8f8f2;">()</span> <span style="color: #f92672;">+</span> <span style="color: #ae81ff;">10</span><span style="color: 
#f92672;">*</span><span style="color: #f8f8f2;">(</span><span style="color: #a6e22e;">i</span><span style="color: #f92672;">/</span><span style="color: #ae81ff;">100</span><span style="color: #f8f8f2;">)</span> <span style="color: #f92672;">+</span> <span style="color: #f8f8f2;">(</span><span style="color: #a6e22e;">i</span><span style="color: #f92672;">%</span><span style="color: #ae81ff;">1500</span><span style="color: #f8f8f2;">));</span><br /> <span style="color: #66d9ef;">if</span> <span style="color: #f8f8f2;">(</span><span style="color: #a6e22e;">i</span><span style="color: #f92672;">%</span><span style="color: #ae81ff;">100</span> <span style="color: #f92672;">===</span> <span style="color: #ae81ff;">99</span><span style="color: #f8f8f2;">)</span> <span style="color: #f8f8f2;">{</span><br /> <span style="color: #a6e22e;">x</span><span style="color: #f8f8f2;">.</span><span style="color: #a6e22e;">push</span><span style="color: #f8f8f2;">(</span><span style="color: #66d9ef;">NaN</span><span style="color: #f8f8f2;">);</span><br /> <span style="color: #a6e22e;">y</span><span style="color: #f8f8f2;">.</span><span style="color: #a6e22e;">push</span><span style="color: #f8f8f2;">(</span><span style="color: #66d9ef;">NaN</span><span style="color: #f8f8f2;">);</span><br /> <span style="color: #f8f8f2;">}</span><br /><span style="color: #f8f8f2;">}</span><br /><br /><span style="color: #a6e22e;">Plotly</span><span style="color: #f8f8f2;">.</span><span style="color: #a6e22e;">react</span><span style="color: #f8f8f2;">(</span><span style="color: #e6db74;">'plot'</span><span style="color: #f8f8f2;">,</span> <span style="color: #f8f8f2;">[{</span> <span style="color: #a6e22e;">x</span><span style="color: #f92672;">:</span> <span style="color: #a6e22e;">x</span><span style="color: #f8f8f2;">,</span> <span style="color: #a6e22e;">y</span><span style="color: #f92672;">:</span> <span style="color: #a6e22e;">y</span><span style="color: #f8f8f2;">,</span> <span style="color: 
#a6e22e;">type</span><span style="color: #f92672;">:</span> <span style="color: #e6db74;">'scattergl'</span><span style="color: #f8f8f2;">,</span> <span style="color: #a6e22e;">line</span><span style="color: #f92672;">:</span> <span style="color: #f8f8f2;">{</span> <span style="color: #a6e22e;">color</span><span style="color: #f92672;">:</span> <span style="color: #e6db74;">'magenta'</span> <span style="color: #f8f8f2;">}}],</span> <span style="color: #f8f8f2;">{</span> <span style="color: #a6e22e;">showlegend</span><span style="color: #f92672;">:</span> <span style="color: #66d9ef;">false</span> <span style="color: #f8f8f2;">});</span><br /></pre></div><br />Try running that with the if statement there, then commented out (<a href="https://codepen.io/rhamner/pen/zQqgbd">code is here</a>).<br /><br /></div><h4>Example implementation</h4><div><br /><a href="https://codepen.io/rhamner/pen/KLzOqK">Code is here</a></div><div><br /></div><div>I used the NaN trick above to plot thousands of traces without the many-trace slowdown you normally see in plotly. Basically, I wanted traces to ramp from left to right, have the left and right sides be different colors, and have no interpolating lines going from right to left. If you click the checkbox a couple of times, you can clearly see the performance improvement with the NaN trick. You can also see it by hovering over the plot and moving the mouse around. Without the NaN trick, you get really noticeable lag.<br /><br /></div><h4>Tradeoff</h4><div>This does lose the ability to get the trace id from the default hover or from a default legend. 
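type</span>">
One way to keep per-point identity in a merged trace is Plotly's customdata and hovertemplate options. Here is a minimal sketch of that approach (the sweepId array and field layout are my own naming, not from the original code):

```javascript
// Build one merged trace where every point carries its sweep index in
// customdata; hovertemplate then shows that index on hover.
let x = [];
let y = [];
let sweepId = [];

for (let sweep = 0; sweep < 3; sweep++) {
  for (let i = 0; i < 100; i++) {
    x.push(i);
    y.push(sweep * 10 + Math.random());
    sweepId.push(sweep);  // remember which sweep this point belongs to
  }
  // NaN separator so the sweeps don't connect right-to-left
  x.push(NaN);
  y.push(NaN);
  sweepId.push(sweep);
}

let trace = {
  x: x,
  y: y,
  customdata: sweepId,
  type: 'scattergl',
  mode: 'lines',
  hovertemplate: 'sweep %{customdata}: (%{x}, %{y:.2f})<extra></extra>'
};

// Only render when Plotly is loaded in the page.
if (typeof Plotly !== 'undefined') {
  Plotly.react('plot', [trace], { showlegend: false });
}
```

The same customdata values are available in click and hover event handlers, so a custom legend can be driven off them as well.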
You can include arbitrary values in traces so you can still get it in a custom hover or click, and you can always create your own legend pretty easily.</div><div><br /></div><div><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-21876239098385138562019-05-02T21:04:00.000-07:002019-05-02T21:04:02.481-07:00Do Stocks Perform Better After a Gain or a Loss?Simple question...do stocks do better the day after stocks go up or the day after stocks go down?<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-dy7Kuzbgib8/XMu9JfXev4I/AAAAAAAAE8s/hL4tI7PQlvstDG8JcOVWTMaeF1YCUWBowCEwYBhgL/s1600/histogram-min.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="687" data-original-width="1249" height="auto" src="https://4.bp.blogspot.com/-dy7Kuzbgib8/XMu9JfXev4I/AAAAAAAAE8s/hL4tI7PQlvstDG8JcOVWTMaeF1YCUWBowCEwYBhgL/s1600/histogram-min.png" width="0%" /></a></div><div><a name='more'></a>To test this, I used the daily S&P 500 gains from 1950 through 2018. If the S&P 500 went up one day, the next day's gain went into the 'after gain' bucket. If the S&P 500 went down one day, the next day's gain went into the 'after loss' bucket. Here is the distribution of gains in each bucket:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-dy7Kuzbgib8/XMu9JfXev4I/AAAAAAAAE8s/hL4tI7PQlvstDG8JcOVWTMaeF1YCUWBowCEwYBhgL/s1600/histogram-min.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="687" data-original-width="1249" height="auto" src="https://4.bp.blogspot.com/-dy7Kuzbgib8/XMu9JfXev4I/AAAAAAAAE8s/hL4tI7PQlvstDG8JcOVWTMaeF1YCUWBowCEwYBhgL/s1600/histogram-min.png" width="80%" /></a></div><br />It's pretty clear there that stocks perform better following a gain than following a loss. 
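The bucketing described above takes only a few lines; here is a sketch of the logic (the gains array below is made-up illustrative data, not real S&P 500 returns):

```javascript
// Split each day's gain into a bucket based on the sign of the
// previous day's gain, then compare the two buckets.
function bucketBySign(gains) {
  let afterGain = [];
  let afterLoss = [];
  for (let i = 1; i < gains.length; i++) {
    if (gains[i - 1] > 0) {
      afterGain.push(gains[i]);   // yesterday was up
    } else if (gains[i - 1] < 0) {
      afterLoss.push(gains[i]);   // yesterday was down
    }
  }
  return { afterGain, afterLoss };
}

// Illustrative daily percentage gains only.
let { afterGain, afterLoss } = bucketBySign([0.5, 1.2, -0.3, 0.8, -0.1, -0.4]);
```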
The median values of those buckets are 0.1% and 0.0% for the 'after gain' and 'after loss' buckets respectively.<br /><br /><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-17032662698940003302019-04-21T21:01:00.000-07:002019-04-21T22:17:45.815-07:00How to Make Amazing Shrimp and GritsI often order shrimp and grits when we get seafood, and the one at <a href="https://salttraderscc.com/">Salt Traders</a> in Round Rock, TX inspired me to figure out how to make it well. Here is my Salt Traders shrimp and grits recipe attempt.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-V8a7H7RaxJA/XL06htCDAUI/AAAAAAAAE7E/2Cji8_59Jkcq614BbvBqmvEVaB64GGZmACLcBGAs/s1600/IMG_20190421_121854116%2B%25282%2529.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1600" data-original-width="1200" height="auto" src="https://4.bp.blogspot.com/-V8a7H7RaxJA/XL06htCDAUI/AAAAAAAAE7E/2Cji8_59Jkcq614BbvBqmvEVaB64GGZmACLcBGAs/s1600/IMG_20190421_121854116%2B%25282%2529.jpg" width="60%" /></a></div><br /><br /><a name='more'></a><h4>Total time: 30 minutes</h4><h4>Servings: 4</h4><h4><br /></h4><h4>Ingredients</h4><div><ul><li>1 pound shrimp (16/20 per pound is the size I used)</li><li>2 large bacon strips</li><li>1 cup grits</li><li>1 cup whipping cream</li><li>2 tablespoons butter</li><li>1 cup blue cheese</li><li>1 green onion</li><li>1/4 cup white onion</li><li>marinara sauce</li><li>cajun seasoning</li><li>honey</li><li>hot sauce</li><li>garlic powder</li></ul><h4>Directions</h4></div><div><ul><li>Dice the green onions and slice the bacon into small pieces</li><li>Bring a mixture of 3 cups of water and 1 cup of whipping cream to boil.</li><li>Add the grits, lower the heat, and let them cook.</li><li>When finished, stir in the butter and blue cheese and reduce to simmer.</li><li>While that 
simmers, cook the bacon and the shrimp in a skillet.</li><li>Push bacon and shrimp to the side when cooked, and cook the onions in the bacon grease.</li><li>Season the shrimp/bacon/onion mix with the Cajun seasoning, honey, hot sauce, and garlic powder.</li><li>To serve, scoop the grits into a bowl, put a couple of spoons of marinara sauce on top, and put the shrimp/bacon/onion mix on top of that.</li></ul><h4>Result</h4></div><div>This was awesome. It tasted very similar to the Salt Traders' one from what I roughly remember. This is very heavy though. The rough nutritional info per serving (it makes 4) is below:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/--rqIBhNwVkU/XL08EekIQdI/AAAAAAAAE7U/MbdklMvs0dUBFVnheTyA2C_gR_W89-T7QCLcBGAs/s1600/image%2B%25283%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="541" data-original-width="359" height="auto" src="https://2.bp.blogspot.com/--rqIBhNwVkU/XL08EekIQdI/AAAAAAAAE7U/MbdklMvs0dUBFVnheTyA2C_gR_W89-T7QCLcBGAs/s1600/image%2B%25283%2529.png" width="50%" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: center;"><br /></div></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-25518793580773685832019-04-19T23:54:00.002-07:002019-04-19T23:54:58.320-07:00Real Life Examples of Various DistributionsYou might have heard of 'normal', 'Poisson', and other 'distributions'. What real-life situations result in those?<br /><div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-0jvi-JO8OlM/XLrBcboza-I/AAAAAAAAE6k/kJP_pgoJGuQZIlD47xWAk9fCADFhoxivACLcBGAs/s1600/U%2Bdist.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="404" 
data-original-width="1600" height="auto" src="https://1.bp.blogspot.com/-0jvi-JO8OlM/XLrBcboza-I/AAAAAAAAE6k/kJP_pgoJGuQZIlD47xWAk9fCADFhoxivACLcBGAs/s800/U%2Bdist.gif" width="0%" /></a></div><a name='more'></a><h4>Example of Uniform</h4></div><div>A uniform distribution (often called 'rectangular') is one in which all values between two boundaries occur roughly equally. For example, if you roll a six-sided die, you're equally likely to get 1, 2, 3, 4, 5, or 6. If you rolled it 6,000 times, you'd probably get roughly 1,000 of each result. The results would form a uniform distribution from 1 to 6.</div><div><br /></div><div>Another example of something that's uniformly distributed is the digits of pi. Each digit makes up about 10% of the values. To show that, I put together a quick visualization of the first 500 digits. In this, each digit should occur roughly 50 times:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-KqgN9MW0OC0/XLrBOnv7SeI/AAAAAAAAE6c/hTCDMZ2QmakTmI5xZinOI7-glA52m5HugCLcBGAs/s1600/pi%2Bdist.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="481" data-original-width="1600" height="auto" src="https://3.bp.blogspot.com/-KqgN9MW0OC0/XLrBOnv7SeI/AAAAAAAAE6c/hTCDMZ2QmakTmI5xZinOI7-glA52m5HugCLcBGAs/s800/pi%2Bdist.gif" width="80%" /></a></div><div><br /><h4>Example of Normal</h4></div><div>A normal distribution looks like a bell. It's often called a 'bell curve' or a 'Gaussian'. Many things in nature have nearly-normal distributions...heights of men in the US...measurement errors...IQs. A cool thing related to them though is the <a href="https://en.wikipedia.org/wiki/Central_limit_theorem">Central Limit Theorem</a>. 
It roughly states that if you repeatedly take samples from almost any distribution, normal or not, the means of those samples will be approximately normally distributed.<br /><br />As a simple example of that, I generated 20 random values between 0 and 9 (uniform distribution with a mean of 4.5) 1000 times. Each iteration, I took the mean of those 20 random values, and made a histogram of the means found so far. You can see that it is roughly normal (bell-shaped):</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-WHsHVMIezCk/XLrBVZ3IlzI/AAAAAAAAE6g/3mpozn8fmjArI5pMaVGUp6Rd4eggZsPqwCLcBGAs/s1600/normal%2Bdist.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="404" data-original-width="1600" height="auto" src="https://4.bp.blogspot.com/-WHsHVMIezCk/XLrBVZ3IlzI/AAAAAAAAE6g/3mpozn8fmjArI5pMaVGUp6Rd4eggZsPqwCLcBGAs/s800/normal%2Bdist.gif" width="80%" /></a></div><div><br /></div><h4>Example of U</h4><div>A U distribution is one in which points are more likely to be at the edges of a range than in the middle. For example, if 40% of students in a class get A's, 40% get zero, and the remaining 20% get something in between, that would form a U distribution.</div><div><br /></div><div>A cool example of this distribution type is the position of an object with sinusoidal motion. Imagine measuring the angle of a pendulum every 1/100 of a second. It slows down on the sides, and speeds up in the middle, so more measurements will be at the edges than in the middle. 
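You can check that numerically without any animation. The sketch below samples a sinusoid at uniform time steps and counts how many samples fall near the extremes versus near the center (the 0.8 and 0.2 bin edges are my own arbitrary choices):

```javascript
// Sample sinusoidal position at uniform times. Because the motion
// spends more time near the turning points, the outer bins fill up
// much faster than the center bin.
let edge = 0;
let middle = 0;
for (let i = 0; i < 10000; i++) {
  let p = Math.sin(2 * Math.PI * i / 1000);  // position in [-1, 1]
  if (Math.abs(p) > 0.8) edge++;             // near a turning point
  else if (Math.abs(p) < 0.2) middle++;      // near the center
}
// edge ends up roughly 3x larger than middle, even though the
// edge and middle bins each cover the same width of positions.
```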
I animated a perfect one here (motion around a circle is sinusoidal in two dimensions, while a pendulum is sinusoidal in one; two dimensions looked cooler):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-0jvi-JO8OlM/XLrBcboza-I/AAAAAAAAE6k/kJP_pgoJGuQZIlD47xWAk9fCADFhoxivACLcBGAs/s1600/U%2Bdist.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="404" data-original-width="1600" height="auto" src="https://1.bp.blogspot.com/-0jvi-JO8OlM/XLrBcboza-I/AAAAAAAAE6k/kJP_pgoJGuQZIlD47xWAk9fCADFhoxivACLcBGAs/s800/U%2Bdist.gif" width="80%" /></a></div><br /><h4>Example of Poisson</h4></div><div>Poisson distributions give the probability of something occurring a certain number of times if it typically occurs at a fixed rate and each occurrence is independent of previous occurrences. An example use case is an online tutoring service that typically gets 4 students in the period between 9 pm and 9:30 pm and wants to calculate the probability of getting 6 students in that period.</div><div><br /></div><div>Scores in the group stage of the World Cup can be modeled reasonably well with a Poisson distribution. 
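The tutoring example above works out directly from the Poisson formula P(k) = lambda^k * e^(-lambda) / k!. A quick calculation (the helper function name is mine):

```javascript
// Poisson probability of seeing exactly k events when the average
// rate is lambda: P(k) = lambda^k * e^(-lambda) / k!
function poisson(k, lambda) {
  let factorial = 1;
  for (let i = 2; i <= k; i++) factorial *= i;
  return Math.pow(lambda, k) * Math.exp(-lambda) / factorial;
}

// Probability the tutoring service (average of 4 students) gets exactly 6:
let p = poisson(6, 4);  // about 0.104, i.e. roughly a 10% chance
```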
I took the game scores from the last 6 World Cups and animated them below:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-0j3m_pUC3H8/XLrBiWA2QJI/AAAAAAAAE6s/NljFxwDq-VUULkpem1MACALo6tEyPs5gQCLcBGAs/s1600/poisson%2Bsoccer%2Bdist.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="462" data-original-width="1600" height="auto" src="https://4.bp.blogspot.com/-0j3m_pUC3H8/XLrBiWA2QJI/AAAAAAAAE6s/NljFxwDq-VUULkpem1MACALo6tEyPs5gQCLcBGAs/s800/poisson%2Bsoccer%2Bdist.gif" width="80%" /></a></div><div><br /><br /></div><div><br /></div><div><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-90613544581355845472019-04-16T17:45:00.004-07:002019-04-16T21:46:24.058-07:00Real S&P 500 Returns<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.4.0/jquery.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/plotly.js/1.47.2/plotly.min.js"></script> <br />I took a crack at visualizing actual returns for long-term, steady investments in the S&P 500.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-ny_WenJcLRQ/XLZ47wjdehI/AAAAAAAAE58/ivP4NqL6ZDgKxaisVow3KP13my87Kh5RgCLcBGAs/s1600/sp%2Byield%2Bmap.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="855" data-original-width="1490" height="auto" src="https://3.bp.blogspot.com/-ny_WenJcLRQ/XLZ47wjdehI/AAAAAAAAE58/ivP4NqL6ZDgKxaisVow3KP13my87Kh5RgCLcBGAs/s1600/sp%2Byield%2Bmap.png" width="0%" /></a></div><br /><a name='more'></a><h4>Results</h4><div>The plot below shows the real yields. I didn't see an easy way to update the color bar, so read the '10' and '-10' as '>=10' and '<=-10'. Those are all percentages. 
To read it, simply pick a row for your starting year, and read left to right to see how the investment performed. For example, if you start at 1928, then 1928 is 1 year, 1929 is 2 years, etc.<br /><br />If you start at 1966 and check 1995 and see a value of 4.6%, that means that the real yield if you invested steadily for 30 years starting in 1966 is 4.6%. Hovering lets you see more detailed info.</div><div><br /></div><div>Read below the plot for methodology.</div><br /><div id="plot" style="height: 57vh; width: 100%;"></div><div id="hover" style="height: 28vh; width: 100%;"></div><br /><script>x = [1928, 1929, 1930, 1931, 1932, 1933, 1934, 1935, 1936, 1937, 1938, 1939, 1940, 1941, 1942, 1943, 1944, 1945, 1946, 1947, 1948, 1949, 1950, 1951, 1952, 1953, 1954, 1955, 1956, 1957, 1958, 1959, 1960, 1961, 1962, 1963, 1964, 1965, 1966, 1967, 1968, 1969, 1970, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1989, 1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018]; z = [[46.4, 9.0, -8.2, -22.3, -12.7, 5.4, 4.2, 11.4, 15.1, 4.6, 8.6, 7.4, 5.0, 2.2, 3.0, 4.6, 5.8, 7.8, 5.8, 4.6, 3.8, 5.0, 6.2, 7.0, 7.4, 6.6, 8.6, 9.8, 9.8, 8.6, 9.8, 9.8, 9.4, 9.8, 9.0, 9.4, 9.8, 9.8, 8.6, 9.0, 9.0, 8.2, 7.8, 7.8, 8.2, 7.4, 5.8, 6.2, 6.6, 5.8, 5.8, 5.8, 6.2, 5.4, 5.8, 5.8, 5.8, 6.2, 6.6, 6.2, 6.6, 7.0, 6.6, 7.0, 7.0, 7.0, 6.6, 7.0, 7.4, 7.8, 7.8, 8.2, 7.8, 7.4, 6.6, 7.0, 7.0, 7.0, 7.0, 7.0, 6.2, 6.2, 6.6, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 7.0, 6.6], [NaN, -8.2, -19.1, -29.5, -15.1, 6.6, 5.0, 13.5, 16.7, 5.0, 9.0, 7.8, 5.4, 2.2, 3.0, 4.6, 5.8, 8.2, 5.8, 4.6, 3.8, 5.0, 6.6, 7.0, 7.4, 7.0, 9.0, 10.2, 9.8, 8.6, 9.8, 9.8, 9.4, 9.8, 9.0, 9.4, 9.8, 9.8, 9.0, 9.4, 9.0, 8.2, 7.8, 8.2, 8.2, 7.4, 5.8, 6.2, 6.6, 5.8, 5.8, 5.8, 6.2, 5.4, 5.8, 5.8, 5.8, 6.2, 6.6, 6.2, 6.6, 7.0, 6.6, 7.0, 7.0, 7.0, 6.6, 7.0, 7.4, 7.8, 7.8, 8.2, 
7.8, 7.4, 6.6, 7.0, 7.0, 7.0, 7.0, 7.0, 6.2, 6.2, 6.6, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 7.0, 6.6], [NaN, NaN, -23.5, -33.1, -13.5, 12.7, 8.6, 17.5, 20.7, 6.6, 11.0, 9.4, 6.2, 3.0, 3.4, 5.4, 6.6, 9.0, 6.2, 5.0, 4.2, 5.4, 7.0, 7.4, 7.8, 7.0, 9.4, 10.6, 10.2, 9.0, 10.2, 10.2, 9.8, 10.2, 9.4, 9.8, 10.2, 10.2, 9.0, 9.4, 9.4, 8.6, 8.2, 8.2, 8.2, 7.4, 5.8, 6.6, 6.6, 6.2, 5.8, 5.8, 6.2, 5.4, 5.8, 6.2, 5.8, 6.2, 6.6, 6.6, 6.6, 7.0, 6.6, 7.0, 7.0, 7.0, 6.6, 7.0, 7.4, 7.8, 8.2, 8.2, 7.8, 7.4, 7.0, 7.0, 7.0, 7.0, 7.0, 7.0, 6.2, 6.2, 6.6, 6.2, 6.6, 6.6, 7.0, 6.6, 7.0, 7.0, 6.6], [NaN, NaN, NaN, -38.4, -9.0, 22.3, 13.9, 23.1, 25.1, 8.2, 12.7, 10.6, 7.0, 3.0, 3.8, 5.8, 7.0, 9.4, 6.6, 5.0, 4.2, 5.4, 7.0, 7.4, 8.2, 7.4, 9.8, 11.0, 10.6, 9.0, 10.6, 10.6, 9.8, 10.6, 9.4, 9.8, 10.2, 10.2, 9.4, 9.8, 9.4, 8.6, 8.2, 8.2, 8.6, 7.4, 5.8, 6.6, 6.6, 6.2, 5.8, 5.8, 6.2, 5.4, 5.8, 6.2, 5.8, 6.2, 6.6, 6.6, 6.6, 7.0, 6.6, 7.0, 7.0, 7.0, 6.6, 7.4, 7.4, 7.8, 8.2, 8.2, 7.8, 7.4, 7.0, 7.0, 7.0, 7.0, 7.0, 7.0, 6.2, 6.2, 6.6, 6.2, 6.6, 6.6, 7.0, 6.6, 7.0, 7.0, 6.6], [NaN, NaN, NaN, NaN, 1.4, 38.4, 19.1, 28.3, 29.1, 7.8, 13.1, 10.6, 6.2, 2.2, 3.0, 5.0, 6.2, 9.0, 5.8, 4.2, 3.4, 5.0, 6.6, 7.0, 7.8, 7.0, 9.4, 10.6, 10.2, 9.0, 10.2, 10.2, 9.8, 10.2, 9.4, 9.8, 10.2, 10.2, 9.0, 9.4, 9.4, 8.6, 8.2, 8.2, 8.2, 7.4, 5.8, 6.2, 6.6, 5.8, 5.8, 5.8, 5.8, 5.4, 5.4, 5.8, 5.8, 6.2, 6.6, 6.2, 6.6, 7.0, 6.6, 7.0, 6.6, 6.6, 6.6, 7.0, 7.4, 7.8, 7.8, 8.2, 7.8, 7.4, 6.6, 7.0, 7.0, 7.0, 7.0, 7.0, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.6, 6.6, 6.6, 7.0, 6.6], [NaN, NaN, NaN, NaN, NaN, 50.0, 16.7, 28.7, 29.1, 3.4, 10.2, 7.8, 3.4, -1.0, 0.6, 3.0, 4.6, 7.8, 4.6, 3.0, 2.2, 3.4, 5.4, 6.2, 7.0, 6.2, 8.6, 10.2, 9.8, 8.2, 9.8, 9.8, 9.0, 9.8, 9.0, 9.4, 9.4, 9.8, 8.6, 9.0, 9.0, 8.2, 7.8, 7.8, 7.8, 7.0, 5.0, 5.8, 6.2, 5.4, 5.0, 5.4, 5.4, 5.0, 5.0, 5.4, 5.4, 5.8, 6.2, 5.8, 6.2, 6.6, 6.2, 6.6, 6.6, 6.6, 6.2, 7.0, 7.0, 7.4, 7.8, 7.8, 7.4, 7.0, 6.6, 6.6, 6.6, 6.6, 6.6, 6.6, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 6.6], [NaN, NaN, NaN, NaN, 
NaN, NaN, -4.2, 24.3, 26.3, -4.2, 5.8, 4.2, -0.2, -4.6, -2.2, 0.6, 3.0, 6.6, 3.0, 1.4, 0.6, 2.2, 4.6, 5.4, 6.2, 5.4, 8.2, 9.4, 9.4, 7.8, 9.4, 9.4, 8.6, 9.4, 8.6, 9.0, 9.4, 9.4, 8.2, 8.6, 8.6, 7.8, 7.4, 7.4, 7.4, 6.6, 4.6, 5.4, 5.8, 5.0, 5.0, 5.0, 5.0, 4.6, 4.6, 5.0, 5.0, 5.4, 5.8, 5.8, 5.8, 6.2, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 7.0, 7.4, 7.4, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, 43.6, 34.3, -7.8, 5.8, 3.8, -1.0, -5.4, -3.0, 0.6, 3.0, 7.0, 3.0, 1.4, 0.6, 2.2, 4.6, 5.4, 6.2, 5.4, 8.2, 9.8, 9.4, 7.8, 9.4, 9.4, 9.0, 9.8, 8.6, 9.0, 9.4, 9.4, 8.2, 9.0, 8.6, 7.8, 7.4, 7.4, 7.8, 6.6, 4.6, 5.4, 5.8, 5.0, 5.0, 5.0, 5.0, 4.6, 4.6, 5.0, 5.0, 5.4, 5.8, 5.8, 5.8, 6.2, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 7.0, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 29.9, -20.7, 2.6, 1.4, -3.4, -7.8, -4.2, 0.2, 2.6, 7.0, 3.0, 1.0, 0.2, 2.2, 4.6, 5.4, 6.2, 5.4, 8.6, 10.2, 9.8, 7.8, 9.8, 9.8, 9.0, 9.8, 8.6, 9.4, 9.4, 9.4, 8.6, 9.0, 8.6, 7.8, 7.4, 7.4, 7.8, 6.6, 4.6, 5.4, 5.8, 5.0, 4.6, 4.6, 5.0, 4.6, 4.6, 5.0, 5.0, 5.4, 5.8, 5.8, 5.8, 6.2, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 7.0, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -37.6, 3.8, 1.8, -3.8, -8.6, -4.2, 0.2, 3.4, 7.8, 3.4, 1.0, 0.2, 2.2, 5.0, 5.8, 6.6, 5.4, 9.0, 10.6, 10.2, 8.2, 10.2, 10.2, 9.4, 10.2, 9.0, 9.4, 9.8, 9.8, 8.6, 9.0, 9.0, 7.8, 7.4, 7.4, 7.8, 6.6, 4.6, 5.4, 5.8, 5.0, 4.6, 4.6, 5.0, 4.2, 4.6, 5.0, 5.0, 5.4, 5.8, 5.8, 5.8, 6.2, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 7.0, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 31.9, 11.0, 0.2, -7.0, -2.2, 2.6, 5.4, 10.2, 4.6, 1.8, 0.6, 3.0, 5.8, 6.6, 7.4, 6.2, 9.8, 11.4, 11.0, 8.6, 
10.6, 10.6, 9.8, 10.6, 9.4, 9.8, 10.2, 10.2, 9.0, 9.4, 9.4, 8.2, 7.8, 7.8, 7.8, 6.6, 4.6, 5.4, 5.8, 5.0, 5.0, 5.0, 5.4, 4.6, 4.6, 5.0, 5.0, 5.4, 5.8, 5.8, 5.8, 6.2, 5.8, 6.6, 6.2, 6.2, 6.2, 6.6, 7.0, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 6.2, 5.8, 6.2, 6.2, 6.6, 6.2, 6.6, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 0.2, -7.0, -12.2, -5.0, 1.4, 4.6, 10.2, 3.8, 1.0, -0.2, 2.2, 5.4, 6.2, 7.0, 5.8, 9.4, 11.4, 10.6, 8.6, 10.6, 10.6, 9.8, 10.6, 9.0, 9.8, 10.2, 10.2, 8.6, 9.4, 9.0, 7.8, 7.4, 7.4, 7.8, 6.6, 4.6, 5.4, 5.8, 5.0, 4.6, 4.6, 5.0, 4.2, 4.6, 5.0, 5.0, 5.4, 5.8, 5.4, 5.8, 6.2, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 7.0, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -11.4, -15.5, -5.4, 2.2, 5.4, 11.4, 3.8, 0.6, -1.0, 1.8, 5.4, 6.2, 7.0, 5.8, 9.8, 11.4, 11.0, 8.6, 10.6, 10.6, 9.8, 10.6, 9.0, 9.8, 10.2, 10.2, 8.6, 9.4, 9.0, 7.8, 7.4, 7.4, 7.8, 6.6, 4.2, 5.0, 5.4, 4.6, 4.6, 4.6, 5.0, 4.2, 4.2, 5.0, 4.6, 5.4, 5.8, 5.4, 5.8, 6.2, 5.8, 6.2, 6.2, 6.2, 5.8, 6.6, 6.6, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -17.1, -2.6, 5.0, 7.8, 14.3, 4.6, 0.2, -1.4, 1.8, 5.4, 6.2, 7.4, 5.8, 9.8, 11.8, 11.4, 8.6, 11.0, 11.0, 9.8, 11.0, 9.4, 9.8, 10.2, 10.2, 8.6, 9.4, 9.0, 7.8, 7.4, 7.4, 7.8, 6.2, 4.2, 5.0, 5.4, 4.6, 4.2, 4.2, 4.6, 3.8, 4.2, 4.6, 4.6, 5.4, 5.4, 5.4, 5.4, 6.2, 5.8, 6.2, 6.2, 6.2, 5.8, 6.6, 6.6, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 7.4, 11.0, 11.8, 18.3, 5.4, 0.2, -1.4, 1.8, 5.8, 7.0, 7.8, 5.8, 10.6, 12.7, 11.8, 9.0, 11.4, 11.4, 10.2, 11.0, 9.4, 10.2, 10.6, 10.6, 9.0, 9.4, 9.4, 7.8, 7.4, 7.4, 7.8, 6.2, 4.2, 5.0, 5.4, 4.6, 4.2, 4.2, 
4.6, 3.8, 4.2, 4.6, 4.6, 5.0, 5.4, 5.4, 5.4, 6.2, 5.8, 6.2, 6.2, 6.2, 5.8, 6.6, 6.6, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 17.9, 15.5, 22.7, 5.8, -0.2, -1.8, 1.8, 6.6, 7.4, 8.2, 6.6, 11.4, 13.5, 12.7, 9.4, 11.8, 11.8, 10.6, 11.4, 9.8, 10.6, 10.6, 10.6, 9.0, 9.8, 9.4, 8.2, 7.4, 7.4, 7.8, 6.6, 3.8, 5.0, 5.4, 4.6, 4.2, 4.2, 4.6, 3.8, 4.2, 4.6, 4.6, 5.0, 5.4, 5.4, 5.4, 6.2, 5.8, 6.2, 6.2, 6.2, 5.8, 6.6, 6.6, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.6, 6.6, 6.6, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 17.1, 26.3, 3.8, -2.2, -3.8, 1.0, 6.6, 7.4, 8.6, 6.2, 11.8, 14.3, 13.1, 9.4, 12.2, 12.2, 10.6, 11.8, 9.8, 10.6, 11.0, 10.6, 9.0, 9.8, 9.4, 8.2, 7.4, 7.4, 7.8, 6.2, 3.8, 4.6, 5.4, 4.2, 4.2, 4.2, 4.6, 3.8, 3.8, 4.6, 4.2, 5.0, 5.4, 5.4, 5.4, 6.2, 5.4, 6.2, 6.2, 6.2, 5.8, 6.6, 6.6, 7.4, 7.8, 7.8, 7.4, 7.0, 6.2, 6.6, 6.6, 6.2, 6.6, 6.2, 5.4, 5.4, 5.8, 5.4, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 32.7, -1.8, -6.6, -7.4, -0.6, 5.8, 7.0, 8.2, 5.8, 11.8, 14.3, 13.1, 9.4, 12.2, 11.8, 10.6, 11.8, 9.8, 10.6, 10.6, 10.6, 9.0, 9.4, 9.4, 7.8, 7.4, 7.4, 7.8, 6.2, 3.4, 4.2, 5.0, 3.8, 3.8, 3.8, 4.2, 3.4, 3.8, 4.2, 4.2, 5.0, 5.0, 5.0, 5.4, 5.8, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 6.6, 7.0, 7.4, 7.8, 7.4, 6.6, 5.8, 6.2, 6.2, 6.2, 6.2, 6.2, 5.0, 5.4, 5.8, 5.4, 5.8, 6.2, 6.2, 6.2, 6.2, 6.2, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -15.5, -13.1, -11.4, -2.2, 5.8, 7.0, 8.2, 5.4, 12.2, 15.1, 13.5, 9.4, 12.2, 12.2, 10.6, 11.8, 9.8, 10.6, 10.6, 10.6, 9.0, 9.4, 9.4, 7.8, 7.0, 7.0, 7.4, 5.8, 3.0, 4.2, 4.6, 3.8, 3.4, 3.4, 3.8, 3.0, 3.4, 4.2, 3.8, 4.6, 5.0, 5.0, 5.0, 5.8, 5.4, 5.8, 5.8, 5.8, 5.4, 6.2, 6.6, 7.0, 7.4, 7.8, 7.4, 6.6, 
5.8, 6.2, 6.2, 6.2, 6.2, 6.2, 5.0, 5.4, 5.4, 5.4, 5.4, 5.8, 6.2, 5.8, 6.2, 6.2, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -8.2, -8.6, 1.8, 10.2, 10.6, 11.0, 7.4, 14.7, 17.1, 15.1, 10.6, 13.9, 13.5, 11.4, 12.7, 10.2, 11.0, 11.4, 11.4, 9.4, 9.8, 9.8, 8.2, 7.4, 7.4, 7.8, 5.8, 3.0, 4.2, 5.0, 3.8, 3.4, 3.4, 4.2, 3.0, 3.4, 4.2, 3.8, 4.6, 5.0, 5.0, 5.0, 5.8, 5.4, 5.8, 5.8, 5.8, 5.4, 6.2, 6.6, 7.0, 7.4, 7.8, 7.4, 6.6, 5.8, 6.2, 6.2, 6.2, 6.2, 6.2, 5.0, 5.4, 5.4, 5.4, 5.4, 6.2, 6.2, 6.2, 6.2, 6.2, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -2.2, 9.4, 17.5, 15.9, 15.5, 9.8, 17.9, 20.3, 17.5, 12.2, 15.5, 15.1, 12.7, 13.9, 11.4, 12.2, 12.2, 12.2, 9.8, 10.6, 10.2, 8.6, 7.8, 7.8, 8.2, 6.2, 3.4, 4.6, 5.0, 4.2, 3.8, 3.8, 4.2, 3.4, 3.8, 4.2, 4.2, 5.0, 5.4, 5.0, 5.4, 5.8, 5.4, 6.2, 5.8, 5.8, 5.8, 6.6, 6.6, 7.4, 7.8, 8.2, 7.4, 7.0, 6.2, 6.6, 6.6, 6.2, 6.6, 6.2, 5.0, 5.4, 5.8, 5.4, 5.8, 6.2, 6.2, 6.2, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 19.9, 25.9, 20.7, 18.3, 11.4, 20.3, 22.7, 19.1, 13.1, 16.7, 15.9, 13.5, 14.7, 11.8, 12.7, 12.7, 12.7, 10.2, 11.0, 10.6, 8.6, 7.8, 7.8, 8.2, 6.2, 3.0, 4.2, 5.0, 3.8, 3.4, 3.8, 4.2, 3.0, 3.4, 4.2, 4.2, 5.0, 5.4, 5.0, 5.4, 5.8, 5.4, 6.2, 5.8, 5.8, 5.8, 6.6, 6.6, 7.4, 7.8, 8.2, 7.4, 7.0, 6.2, 6.6, 6.6, 6.2, 6.6, 6.2, 5.0, 5.4, 5.8, 5.4, 5.8, 6.2, 6.2, 6.2, 6.2, 6.2, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 29.1, 19.1, 16.7, 8.2, 19.5, 22.7, 18.7, 11.4, 15.9, 15.1, 12.2, 13.9, 10.6, 11.8, 11.8, 11.8, 9.4, 10.2, 9.8, 7.8, 7.0, 7.0, 7.4, 5.4, 2.2, 3.4, 4.2, 3.0, 2.6, 3.0, 3.4, 2.6, 3.0, 3.4, 3.4, 4.2, 4.6, 4.6, 5.0, 5.4, 5.0, 5.8, 5.4, 5.4, 5.4, 6.2, 6.6, 7.0, 7.4, 7.8, 7.4, 6.6, 5.8, 6.2, 6.2, 6.2, 6.2, 6.2, 4.6, 5.4, 5.4, 5.4, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 5.8], [NaN, NaN, 
NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 14.7, 13.5, 4.2, 19.5, 23.1, 18.3, 10.2, 15.1, 14.3, 11.4, 13.5, 9.8, 11.0, 11.4, 11.4, 8.6, 9.4, 9.4, 7.0, 6.2, 6.6, 7.0, 5.0, 1.4, 3.0, 3.8, 2.6, 2.2, 2.2, 3.0, 1.8, 2.2, 3.0, 3.0, 3.8, 4.2, 4.2, 4.6, 5.4, 4.6, 5.4, 5.4, 5.4, 5.0, 5.8, 6.2, 7.0, 7.4, 7.8, 7.0, 6.6, 5.4, 5.8, 5.8, 5.8, 6.2, 5.8, 4.6, 5.0, 5.0, 5.0, 5.4, 5.8, 5.8, 5.8, 5.8, 6.2, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 15.9, 3.4, 23.9, 26.7, 20.3, 10.2, 16.3, 15.1, 11.8, 13.9, 10.2, 11.4, 11.8, 11.4, 8.6, 9.8, 9.4, 7.0, 6.2, 6.6, 7.0, 4.6, 1.0, 2.6, 3.4, 2.2, 1.8, 2.2, 3.0, 1.8, 2.2, 3.0, 3.0, 3.8, 4.2, 4.2, 4.6, 5.0, 4.6, 5.4, 5.4, 5.4, 5.0, 5.8, 6.2, 7.0, 7.4, 7.8, 7.0, 6.2, 5.4, 5.8, 5.8, 5.8, 5.8, 5.8, 4.6, 5.0, 5.0, 5.0, 5.0, 5.8, 5.8, 5.8, 5.8, 6.2, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -1.8, 29.9, 31.1, 21.5, 9.4, 16.7, 15.1, 11.4, 13.9, 9.4, 11.0, 11.4, 11.4, 8.2, 9.4, 9.0, 6.6, 5.8, 6.2, 6.6, 4.2, 0.6, 2.2, 3.0, 1.8, 1.4, 1.8, 2.6, 1.4, 1.8, 2.6, 2.6, 3.4, 4.2, 3.8, 4.2, 5.0, 4.2, 5.0, 5.0, 5.0, 5.0, 5.8, 6.2, 6.6, 7.4, 7.4, 7.0, 6.2, 5.4, 5.8, 5.8, 5.8, 5.8, 5.8, 4.2, 5.0, 5.0, 5.0, 5.0, 5.4, 5.8, 5.4, 5.8, 5.8, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 50.0, 39.6, 22.7, 7.8, 16.7, 15.1, 11.0, 13.5, 9.0, 10.6, 11.0, 11.0, 7.8, 9.0, 8.6, 6.2, 5.4, 5.4, 6.2, 3.8, -0.2, 1.4, 2.6, 1.0, 1.0, 1.0, 1.8, 0.6, 1.4, 2.2, 2.2, 3.0, 3.8, 3.4, 3.8, 4.6, 4.2, 5.0, 5.0, 5.0, 4.6, 5.4, 5.8, 6.6, 7.0, 7.4, 6.6, 6.2, 5.0, 5.8, 5.8, 5.4, 5.8, 5.8, 4.2, 4.6, 5.0, 4.6, 5.0, 5.4, 5.4, 5.4, 5.4, 5.8, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 
NaN, 33.1, 15.1, -0.2, 13.5, 12.7, 8.2, 11.8, 7.0, 9.0, 9.8, 9.8, 6.6, 8.2, 7.8, 5.0, 4.2, 4.6, 5.4, 3.0, -1.4, 0.6, 1.8, 0.2, 0.2, 0.2, 1.4, -0.2, 0.6, 1.4, 1.4, 2.6, 3.4, 3.0, 3.4, 4.2, 3.8, 4.6, 4.6, 4.6, 4.2, 5.4, 5.8, 6.2, 7.0, 7.4, 6.6, 5.8, 5.0, 5.4, 5.4, 5.4, 5.4, 5.4, 3.8, 4.6, 4.6, 4.6, 4.6, 5.4, 5.4, 5.4, 5.4, 5.8, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 5.8, -7.8, 12.7, 11.4, 7.0, 11.0, 5.8, 8.6, 9.4, 9.4, 5.8, 7.4, 7.0, 4.6, 3.4, 3.8, 5.0, 2.2, -2.2, -0.2, 1.0, -0.2, -0.6, -0.2, 0.6, -0.6, 0.2, 1.0, 1.0, 2.2, 3.0, 3.0, 3.0, 4.2, 3.4, 4.2, 4.2, 4.2, 4.2, 5.0, 5.4, 6.2, 6.6, 7.0, 6.6, 5.8, 4.6, 5.4, 5.4, 5.4, 5.4, 5.4, 3.8, 4.2, 4.6, 4.2, 4.6, 5.0, 5.4, 5.0, 5.4, 5.4, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -13.5, 18.3, 13.9, 7.4, 12.2, 5.8, 8.6, 9.8, 9.8, 5.4, 7.4, 7.0, 4.2, 3.4, 3.8, 4.6, 1.8, -3.0, -0.6, 0.6, -0.6, -1.0, -0.6, 0.6, -1.0, -0.2, 1.0, 0.6, 2.2, 2.6, 2.6, 3.0, 4.2, 3.4, 4.2, 4.2, 4.2, 3.8, 5.0, 5.4, 6.2, 6.6, 7.0, 6.6, 5.8, 4.6, 5.4, 5.4, 5.0, 5.4, 5.4, 3.8, 4.2, 4.6, 4.2, 4.6, 5.0, 5.0, 5.0, 5.4, 5.4, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 40.0, 19.9, 8.6, 14.3, 5.8, 9.4, 10.6, 10.2, 5.8, 7.8, 7.4, 4.2, 3.0, 3.8, 4.6, 1.4, -3.4, -1.0, 0.6, -1.0, -1.4, -1.0, 0.2, -1.4, -0.2, 0.6, 0.6, 2.2, 2.6, 2.6, 3.0, 3.8, 3.4, 4.2, 4.2, 4.2, 3.8, 5.0, 5.4, 6.2, 7.0, 7.0, 6.6, 5.8, 4.6, 5.4, 5.4, 5.0, 5.4, 5.4, 3.8, 4.2, 4.6, 4.2, 4.6, 5.0, 5.0, 5.0, 5.4, 5.4, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 11.4, 2.6, 12.7, 3.4, 8.2, 9.8, 9.8, 4.6, 7.4, 7.0, 3.4, 2.2, 3.0, 4.2, 1.0, -4.6, -1.4, 0.2, -1.8, 
-1.8, -1.4, -0.2, -1.8, -0.6, 0.2, 0.2, 1.8, 2.6, 2.2, 2.6, 3.8, 3.0, 4.2, 3.8, 4.2, 3.8, 5.0, 5.4, 6.2, 7.0, 7.0, 6.6, 5.8, 4.6, 5.0, 5.4, 5.0, 5.4, 5.0, 3.4, 4.2, 4.2, 4.2, 4.2, 5.0, 5.0, 5.0, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -1.4, 15.1, 1.8, 8.6, 10.2, 10.2, 4.2, 7.0, 6.6, 3.0, 1.8, 2.6, 3.8, 0.6, -5.4, -2.2, -0.6, -2.2, -2.6, -1.8, -0.6, -2.2, -1.0, 0.2, -0.2, 1.4, 2.2, 2.2, 2.6, 3.8, 3.0, 3.8, 3.8, 3.8, 3.8, 4.6, 5.4, 6.2, 6.6, 7.0, 6.6, 5.8, 4.6, 5.0, 5.0, 5.0, 5.4, 5.0, 3.4, 4.2, 4.2, 4.2, 4.2, 5.0, 5.0, 5.0, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 25.5, 1.0, 9.8, 11.4, 11.0, 3.8, 7.4, 7.0, 2.6, 1.4, 2.2, 3.8, -0.2, -6.2, -2.6, -0.6, -2.6, -3.0, -2.2, -1.0, -2.6, -1.4, -0.2, -0.2, 1.4, 2.2, 2.2, 2.6, 3.8, 3.0, 3.8, 3.8, 3.8, 3.8, 4.6, 5.4, 6.2, 7.0, 7.4, 6.6, 5.8, 4.6, 5.0, 5.0, 5.0, 5.0, 5.0, 3.4, 3.8, 4.2, 4.2, 4.2, 5.0, 5.0, 5.0, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -9.8, 9.4, 11.8, 11.0, 2.6, 7.0, 6.6, 1.8, 0.6, 1.8, 3.4, -0.6, -7.4, -3.4, -1.4, -3.4, -3.4, -3.0, -1.4, -3.0, -1.8, -0.6, -0.6, 1.0, 1.8, 1.8, 2.2, 3.4, 2.6, 3.8, 3.8, 3.8, 3.4, 4.6, 5.4, 6.2, 7.0, 7.4, 6.6, 5.4, 4.2, 5.0, 5.0, 5.0, 5.0, 5.0, 3.4, 3.8, 4.2, 3.8, 4.2, 5.0, 5.0, 5.0, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 21.1, 16.7, 13.5, 2.2, 7.4, 6.6, 1.0, -0.2, 1.4, 3.0, -1.4, -8.2, -4.2, -1.8, -3.8, -4.2, -3.4, -1.8, -3.4, -2.2, -1.0, -1.0, 1.0, 1.8, 1.8, 2.2, 3.4, 2.6, 3.8, 3.8, 3.8, 3.4, 4.6, 5.4, 6.2, 7.0, 
7.4, 6.6, 5.4, 4.2, 5.0, 5.0, 5.0, 5.0, 5.0, 3.0, 3.8, 4.2, 3.8, 4.2, 4.6, 5.0, 5.0, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 15.1, 11.8, -1.4, 6.2, 5.8, -0.6, -1.4, 0.2, 2.6, -2.2, -9.8, -5.0, -2.6, -4.6, -5.0, -3.8, -2.2, -4.2, -3.0, -1.4, -1.4, 0.6, 1.4, 1.4, 1.8, 3.4, 2.6, 3.4, 3.4, 3.8, 3.4, 4.6, 5.0, 6.2, 7.0, 7.4, 6.6, 5.4, 4.2, 5.0, 5.0, 5.0, 5.0, 5.0, 3.0, 3.8, 3.8, 3.8, 4.2, 4.6, 5.0, 4.6, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 10.6, -5.8, 5.8, 5.4, -1.8, -2.6, -0.6, 2.2, -3.4, -11.4, -5.8, -3.0, -5.4, -5.4, -4.6, -2.6, -4.6, -3.4, -1.4, -1.8, 0.6, 1.4, 1.4, 1.8, 3.4, 2.2, 3.4, 3.4, 3.4, 3.0, 4.6, 5.0, 6.2, 7.0, 7.4, 6.6, 5.4, 4.2, 5.0, 5.0, 4.6, 5.0, 5.0, 3.0, 3.8, 3.8, 3.8, 3.8, 4.6, 5.0, 4.6, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -12.7, 7.4, 5.8, -3.0, -3.8, -1.0, 2.2, -3.8, -12.7, -6.6, -3.4, -5.8, -6.2, -5.0, -3.0, -5.0, -3.4, -1.8, -1.8, 0.2, 1.4, 1.0, 1.8, 3.4, 2.2, 3.4, 3.4, 3.4, 3.0, 4.6, 5.0, 6.2, 7.0, 7.4, 6.6, 5.4, 4.2, 5.0, 5.0, 4.6, 5.0, 5.0, 3.0, 3.4, 3.8, 3.8, 3.8, 4.6, 5.0, 4.6, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 19.9, 9.8, -3.4, -3.8, -0.6, 2.6, -4.2, -13.9, -7.0, -3.8, -6.2, -6.2, -5.0, -3.0, -5.4, -3.8, -1.8, -1.8, 0.2, 1.4, 1.4, 1.8, 3.4, 2.2, 3.4, 3.4, 3.8, 3.4, 4.6, 5.4, 6.2, 7.0, 7.4, 6.6, 5.8, 4.2, 5.0, 5.0, 5.0, 5.0, 5.0, 3.0, 3.8, 3.8, 3.8, 3.8, 4.6, 5.0, 4.6, 5.0, 5.4, 5.0], 
[NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 6.2, -8.2, -7.0, -1.8, 2.2, -5.4, -15.9, -7.8, -4.2, -7.0, -7.0, -5.8, -3.4, -5.8, -4.2, -2.2, -2.2, 0.2, 1.4, 1.4, 1.8, 3.4, 2.2, 3.8, 3.4, 3.8, 3.4, 4.6, 5.4, 6.6, 7.4, 7.8, 6.6, 5.8, 4.2, 5.0, 5.0, 5.0, 5.0, 5.0, 3.0, 3.4, 3.8, 3.8, 3.8, 4.6, 5.0, 4.6, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -13.1, -7.4, -1.0, 3.8, -5.8, -17.5, -8.2, -3.8, -7.0, -7.0, -5.8, -3.0, -5.8, -3.8, -1.8, -2.2, 0.6, 1.8, 1.4, 2.2, 3.8, 2.6, 3.8, 3.8, 3.8, 3.4, 5.0, 5.4, 6.6, 7.4, 7.8, 7.0, 5.8, 4.2, 5.0, 5.0, 5.0, 5.4, 5.0, 3.0, 3.8, 3.8, 3.8, 4.2, 5.0, 5.0, 5.0, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -2.2, 3.8, 7.8, -5.4, -18.3, -7.8, -3.0, -6.6, -6.6, -5.4, -2.6, -5.4, -3.8, -1.4, -1.8, 1.0, 2.2, 1.8, 2.6, 4.2, 2.6, 4.2, 4.2, 4.2, 3.8, 5.0, 5.8, 7.0, 7.8, 8.2, 7.4, 6.2, 4.6, 5.4, 5.4, 5.0, 5.4, 5.4, 3.0, 3.8, 4.2, 3.8, 4.2, 5.0, 5.0, 5.0, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 9.4, 11.8, -6.2, -20.7, -7.8, -2.6, -6.6, -6.6, -5.0, -2.2, -5.4, -3.4, -1.4, -1.4, 1.4, 2.6, 2.2, 2.6, 4.2, 3.0, 4.6, 4.2, 4.2, 3.8, 5.4, 6.2, 7.4, 8.2, 8.6, 7.4, 6.2, 4.6, 5.4, 5.4, 5.4, 5.4, 5.4, 3.0, 3.8, 4.2, 3.8, 4.2, 5.0, 5.4, 5.0, 5.4, 5.8, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 
NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 15.1, -10.2, -25.5, -8.2, -1.8, -7.0, -7.0, -5.0, -2.2, -5.8, -3.4, -1.0, -1.4, 1.4, 2.6, 2.2, 3.0, 4.6, 3.0, 4.6, 4.6, 4.6, 3.8, 5.8, 6.2, 7.4, 8.6, 9.0, 7.8, 6.6, 4.6, 5.4, 5.8, 5.4, 5.8, 5.4, 3.0, 3.8, 4.2, 3.8, 4.2, 5.0, 5.4, 5.0, 5.4, 5.8, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -19.5, -31.9, -7.8, -0.6, -7.0, -7.0, -5.0, -1.8, -5.8, -3.8, -1.0, -1.4, 1.4, 3.0, 2.6, 3.0, 5.0, 3.4, 5.0, 4.6, 4.6, 4.2, 5.8, 6.6, 7.8, 8.6, 9.0, 7.8, 6.6, 4.6, 5.8, 5.8, 5.4, 5.8, 5.4, 3.0, 3.8, 4.2, 3.8, 4.2, 5.0, 5.4, 5.0, 5.4, 5.8, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -33.1, -0.2, 5.4, -4.6, -5.4, -3.8, -0.2, -5.0, -2.6, -0.2, -0.6, 2.6, 3.8, 3.0, 3.8, 5.4, 3.8, 5.4, 5.0, 5.0, 4.6, 6.2, 7.0, 8.2, 9.0, 9.8, 8.2, 7.0, 5.0, 5.8, 5.8, 5.8, 5.8, 5.8, 3.0, 4.2, 4.2, 4.2, 4.6, 5.4, 5.4, 5.4, 5.4, 5.8, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 25.5, 17.5, -0.6, -2.6, -1.4, 1.8, -3.8, -1.4, 1.4, 0.6, 3.8, 5.0, 4.2, 4.6, 6.6, 4.6, 6.2, 5.8, 5.8, 5.0, 7.0, 7.4, 9.0, 9.8, 10.2, 9.0, 7.4, 5.4, 6.2, 6.2, 6.2, 6.2, 6.2, 3.4, 4.2, 4.6, 4.2, 4.6, 5.4, 5.8, 5.4, 5.8, 6.2, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 
NaN, NaN, 17.1, -5.4, -5.0, -2.6, 1.8, -4.6, -1.8, 1.0, 0.2, 3.8, 5.0, 4.2, 4.6, 6.6, 4.6, 6.6, 6.2, 5.8, 5.0, 7.0, 7.8, 9.0, 10.2, 10.6, 9.0, 7.4, 5.4, 6.2, 6.2, 6.2, 6.2, 6.2, 3.4, 4.2, 4.6, 4.2, 4.6, 5.4, 5.8, 5.4, 5.8, 6.2, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -12.7, -7.0, -2.6, 2.6, -5.4, -2.2, 1.4, 0.2, 4.2, 5.4, 4.2, 5.0, 7.0, 5.0, 6.6, 6.2, 6.2, 5.0, 7.0, 7.8, 9.4, 10.2, 10.6, 9.0, 7.4, 5.4, 6.2, 6.6, 6.2, 6.2, 6.2, 3.0, 4.2, 4.6, 4.2, 4.6, 5.4, 5.8, 5.4, 5.8, 6.2, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -1.0, 1.4, 5.8, -5.0, -1.4, 2.2, 0.6, 5.0, 6.2, 5.0, 5.4, 7.8, 5.4, 7.0, 6.6, 6.2, 5.4, 7.4, 8.2, 9.8, 10.6, 11.0, 9.4, 7.8, 5.4, 6.6, 6.6, 6.2, 6.6, 6.2, 3.0, 4.2, 4.6, 4.2, 4.6, 5.4, 5.8, 5.4, 5.8, 6.2, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 6.6, 9.4, -6.2, -1.4, 2.6, 1.0, 5.4, 7.0, 5.4, 5.8, 8.2, 5.4, 7.4, 6.6, 6.6, 5.4, 7.8, 8.6, 9.8, 11.0, 11.4, 9.8, 7.8, 5.4, 6.6, 6.6, 6.2, 6.6, 6.2, 3.0, 4.2, 4.6, 4.2, 4.6, 5.4, 5.8, 5.4, 5.8, 6.2, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 16.3, -8.6, -1.4, 3.8, 1.4, 6.6, 7.8, 6.2, 6.6, 9.0, 5.8, 7.8, 7.4, 7.0, 5.8, 8.2, 9.0, 10.6, 11.4, 11.8, 10.2, 8.2, 
5.8, 6.6, 6.6, 6.2, 6.6, 6.2, 3.0, 4.2, 4.6, 4.2, 4.6, 5.8, 5.8, 5.8, 5.8, 6.2, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -13.5, 0.6, 7.0, 3.4, 9.0, 10.2, 7.8, 7.8, 10.6, 7.0, 9.0, 8.2, 7.8, 6.6, 9.0, 9.8, 11.4, 12.2, 12.7, 10.6, 8.6, 5.8, 7.0, 7.0, 6.6, 7.0, 6.6, 3.4, 4.2, 4.6, 4.2, 4.6, 5.8, 6.2, 5.8, 5.8, 6.6, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 13.5, 15.1, 7.0, 13.1, 13.5, 9.8, 9.8, 12.2, 8.2, 10.2, 9.0, 8.6, 7.0, 9.8, 10.6, 12.2, 13.1, 13.5, 11.4, 9.0, 6.2, 7.4, 7.4, 7.0, 7.0, 7.0, 3.4, 4.6, 5.0, 4.6, 5.0, 5.8, 6.2, 5.8, 6.2, 6.6, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 18.7, 6.2, 14.7, 14.7, 9.8, 9.8, 13.1, 7.8, 10.6, 9.0, 8.6, 7.0, 9.8, 10.6, 12.2, 13.5, 13.9, 11.4, 9.0, 6.2, 7.4, 7.4, 6.6, 7.0, 6.6, 3.0, 4.2, 4.6, 4.2, 4.6, 5.8, 6.2, 5.8, 5.8, 6.6, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 1.8, 16.3, 15.5, 9.4, 9.4, 13.1, 7.4, 10.2, 8.6, 8.2, 6.2, 9.4, 10.6, 12.2, 13.5, 13.9, 11.4, 8.6, 5.4, 7.0, 7.0, 6.6, 6.6, 6.6, 2.6, 3.8, 4.2, 3.8, 4.6, 5.4, 5.8, 5.4, 5.8, 6.2, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 
NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 26.7, 18.7, 9.8, 9.8, 13.9, 7.0, 10.6, 8.6, 8.2, 6.2, 9.4, 10.6, 12.7, 13.9, 14.3, 11.4, 8.6, 5.4, 6.6, 6.6, 6.2, 6.6, 6.2, 2.2, 3.4, 4.2, 3.8, 4.2, 5.4, 5.8, 5.4, 5.8, 6.2, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 16.3, 6.2, 8.2, 13.9, 5.8, 9.8, 8.2, 7.4, 5.4, 9.0, 10.2, 12.7, 13.9, 14.3, 11.4, 8.2, 4.6, 6.2, 6.2, 5.8, 6.2, 5.8, 1.8, 3.0, 3.8, 3.4, 3.8, 5.0, 5.4, 5.0, 5.4, 5.8, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 2.2, 7.4, 14.7, 4.6, 9.8, 7.4, 6.6, 4.6, 9.0, 10.2, 12.7, 14.3, 14.7, 11.0, 8.2, 4.2, 5.8, 6.2, 5.4, 5.8, 5.4, 1.0, 2.6, 3.4, 3.0, 3.4, 5.0, 5.0, 5.0, 5.0, 5.8, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 11.8, 19.5, 3.8, 10.6, 7.8, 6.6, 4.2, 9.0, 10.6, 13.1, 14.7, 15.1, 11.4, 8.2, 3.8, 5.8, 5.8, 5.4, 5.8, 5.4, 0.6, 2.2, 3.0, 2.6, 3.4, 4.6, 5.0, 4.6, 5.0, 5.8, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 
NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 25.5, 1.0, 10.6, 7.0, 6.2, 3.4, 9.0, 10.6, 13.5, 15.5, 15.5, 11.4, 7.8, 3.4, 5.4, 5.4, 5.0, 5.4, 5.0, 0.2, 1.8, 2.6, 2.2, 3.0, 4.6, 5.0, 4.6, 5.0, 5.4, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -7.8, 11.0, 6.6, 5.4, 2.6, 9.4, 11.0, 14.3, 16.3, 16.3, 11.8, 7.8, 2.6, 5.0, 5.4, 4.6, 5.4, 5.0, -0.6, 1.4, 2.2, 1.8, 2.6, 4.2, 4.6, 4.6, 4.6, 5.4, 4.6], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 25.1, 9.8, 7.4, 3.0, 11.4, 12.7, 16.3, 17.9, 17.5, 12.2, 7.8, 2.6, 5.0, 5.4, 4.6, 5.4, 5.0, -1.0, 1.4, 2.2, 1.8, 2.6, 4.2, 4.6, 4.6, 4.6, 5.4, 4.6], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 4.2, 5.0, 1.0, 11.8, 13.5, 17.1, 18.7, 18.7, 12.7, 7.4, 1.8, 4.6, 5.0, 4.2, 5.0, 4.6, -1.4, 1.0, 1.8, 1.4, 2.2, 4.2, 4.6, 4.2, 4.6, 5.4, 4.6], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 
6.6, 0.6, 14.7, 15.9, 19.5, 20.7, 19.9, 13.1, 7.4, 1.0, 4.2, 4.6, 3.8, 4.6, 4.2, -2.2, 0.6, 1.4, 1.0, 2.2, 4.2, 4.6, 4.2, 4.6, 5.4, 4.6], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -1.4, 19.5, 18.7, 22.3, 23.1, 21.5, 13.1, 6.6, -0.2, 3.4, 3.8, 3.4, 4.2, 3.8, -3.0, -0.2, 1.0, 0.6, 1.8, 3.8, 4.2, 4.2, 4.2, 5.4, 4.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 33.5, 23.5, 25.9, 25.5, 23.1, 12.7, 5.4, -2.2, 2.6, 3.0, 2.6, 3.8, 3.4, -4.2, -0.6, 0.6, 0.2, 1.4, 3.4, 4.2, 3.8, 4.2, 5.0, 4.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 19.1, 25.1, 25.1, 22.3, 10.2, 2.6, -5.4, 0.6, 1.4, 1.0, 2.6, 2.2, -5.4, -1.8, -0.2, -0.6, 0.6, 3.0, 3.8, 3.4, 3.8, 5.0, 3.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 29.9, 27.1, 22.3, 7.8, -0.6, -8.6, -1.0, 0.2, 0.2, 1.8, 1.8, -6.6, -2.6, -1.0, -1.0, 0.2, 3.0, 3.8, 3.4, 3.8, 5.0, 
3.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 26.3, 20.7, 3.0, -5.0, -12.7, -3.0, -1.0, -1.0, 1.4, 1.0, -7.8, -3.4, -1.4, -1.4, 0.2, 3.0, 3.4, 3.4, 3.8, 5.0, 3.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 18.3, -3.4, -10.2, -17.1, -4.6, -1.4, -1.4, 1.0, 1.0, -9.0, -3.8, -1.4, -1.8, -0.2, 3.0, 3.8, 3.4, 3.8, 5.0, 3.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -11.8, -14.7, -20.7, -4.2, -1.0, -1.0, 1.8, 1.4, -9.4, -3.8, -1.4, -1.8, 0.2, 3.4, 4.2, 3.8, 4.2, 5.4, 4.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -14.3, -21.5, -0.6, 2.2, 1.4, 3.8, 3.0, -9.4, -3.0, -0.6, -1.0, 0.6, 4.2, 5.0, 4.2, 5.0, 6.2, 4.6], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 
NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -23.1, 6.6, 6.6, 3.8, 5.8, 4.6, -9.8, -2.6, -0.2, -0.6, 1.4, 5.0, 5.4, 5.0, 5.4, 6.6, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 25.5, 13.1, 6.6, 7.8, 5.4, -11.0, -2.6, 0.2, -0.6, 1.4, 5.4, 6.2, 5.4, 5.8, 7.0, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 7.8, 2.6, 6.2, 4.2, -14.7, -4.2, -0.6, -1.0, 1.4, 5.4, 6.2, 5.4, 5.8, 7.0, 5.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 1.4, 7.4, 3.8, -17.9, -5.0, -0.6, -1.0, 1.4, 6.2, 6.6, 5.8, 6.2, 7.8, 5.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 
NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 11.8, 4.6, -21.9, -5.0, 0.2, -0.6, 2.2, 7.4, 7.8, 6.6, 7.0, 8.2, 6.2], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 2.6, -29.1, -4.6, 1.4, 0.2, 3.4, 8.6, 9.0, 7.4, 7.8, 9.0, 7.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -38.8, 0.2, 5.8, 2.6, 5.4, 11.4, 11.0, 9.0, 9.0, 10.2, 7.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 26.3, 17.9, 8.2, 9.8, 15.5, 14.3, 11.0, 10.6, 11.8, 8.6], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 13.1, 
3.0, 7.4, 15.1, 13.5, 9.8, 9.8, 11.4, 7.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -1.0, 7.4, 17.1, 14.7, 9.8, 9.8, 11.4, 7.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 13.5, 23.5, 17.5, 10.6, 10.2, 12.2, 7.4], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 30.3, 17.5, 9.0, 9.4, 11.8, 6.6], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 11.8, 4.2, 7.0, 11.0, 5.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 
NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 1.4, 7.0, 12.7, 4.6], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 10.2, 15.5, 3.8], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, 19.1, 1.0], [NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, NaN, -6.6]]; let zg = [] let text = []; for (let i = 0; i < x.length; i++) { let inner = []; zg[i] = [] for (let j = 0; j < x.length; j++) { inner.push('start: ' + x[i].toString() + '<br>current: ' + x[j].toString() + '<br>yield: ' + z[i][j].toString() + '%'); if (z[i][j] > 10) { zg[i][j] 
= 10; } else if (z[i][j] < -10) { zg[i][j] = -10; } else if (!isNaN(z[i][j])) { zg[i][j] = Math.round(z[i][j]); } else { zg[i][j] = NaN; } } text.push(inner); } let data = [{ x: x, y: x, z: zg, text: text, hoverinfo: 'text', colorscale: 'Jet', type: 'heatmap' }]; let layout = { title: 'Dollar-Cost Averaging the S&P 500<br>inflation-adjusted; dividends reinvested', xaxis: { title: 'target year' }, yaxis: { title: 'start year' }, margin: { t: 35, b: 40 } } Plotly.newPlot(plot, data, layout, { displayModeBar: false, responsive: true }) document.getElementById('plot').on('plotly_hover', function(data) { let xi = []; let yi = []; for (let i = data.points[0].pointIndex[0]; i < 92; i++) { xi.push(x[i]); yi.push(z[data.points[0].pointIndex[0]][i]); } let trace = { x: xi, y: yi, type: 'scattergl' }; let layout = { yaxis: { title: 'annual %yield so far' }, xaxis: { title: 'year' }, title: 'Starting year: ' + data.points[0].y, margin: { t: 45, b: 40 } } Plotly.react('hover', [trace], layout, { displayModeBar: false, responsive: true }); }); </script> <br /><h4>Methodology</h4><div>I got yields from <a href="http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/histretSP.html">here</a> and inflation from <a href="https://inflationdata.com/Inflation/Consumer_Price_Index/HistoricalCPI.aspx?reloaded=true">here</a>. 
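The rate computed in steps 4 and 7 of the algorithm below is the one constant annual return that would turn the same stream of contributions into the same final value. Here's a minimal JavaScript sketch of that solve — the three-year return stream is made up for illustration; the post itself uses the real inflation-adjusted S&P 500 series:

```javascript
// Sketch of the per-cell computation: grow a portfolio with yearly real
// (inflation-adjusted) returns and a fixed real contribution, then solve
// for the single constant rate that reproduces the ending value.
function annuityFV(rate, payment, years) {
  // Future value of `years` start-of-year deposits compounded at a constant rate
  let total = 0;
  for (let i = 0; i < years; i++) {
    total += payment * Math.pow(1 + rate, years - i);
  }
  return total;
}

function dcaYield(realReturns, payment = 100) {
  let value = 0;
  for (const r of realReturns) {
    value = (value + payment) * (1 + r); // deposit, then that year's growth
  }
  // annuityFV is increasing in rate, so bisection finds the matching rate
  let lo = -0.99, hi = 1.0;
  for (let i = 0; i < 100; i++) {
    const mid = (lo + hi) / 2;
    if (annuityFV(mid, payment, realReturns.length) < value) lo = mid;
    else hi = mid;
  }
  return (lo + hi) / 2;
}

// Hypothetical three-year return stream
console.log(dcaYield([0.10, -0.05, 0.07]));
```

With constant returns the solved rate matches the return exactly, which is a quick sanity check on the bisection.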
To calculate the values in the plot, I used the following algorithm:</div><div><ol><li>start at 1928 with $100</li><li>multiply by 1928's yield</li><li>add an inflation-adjusted $100</li><li>calculate the rate that yields that future value with the number of years since start and the annual investment</li><li>multiply by 1929's yield</li><li>add an inflation-adjusted $100</li><li>calculate the rate that yields that future value with the number of years since start and the annual investment</li><li>repeat 4-7 until you hit 2018</li><li>start at 1929 with $100</li><li>repeat 2-8</li><li>repeat for all starting years up through 2018</li></ol><div>The value in the plot is just the rates from steps like #4. Those values are the approximate annual yield of the investment factoring in <a href="https://www.investopedia.com/terms/d/dollarcostaveraging.asp">dollar-cost averaging</a>, inflation, and dividends.<br /><br />If you want to read more on the math for this sort of thing, check out these:<br /><ul><li><a href="http://www.somesolvedproblems.com/2017/12/how-long-does-it-take-to-double-your.html">link</a></li><li><a href="http://www.somesolvedproblems.com/2018/06/what-are-actual-yields-from-investing.html">link</a></li><li><a href="http://www.somesolvedproblems.com/2017/12/how-much-should-you-invest-to-create.html">link</a></li></ul><div><br /></div></div></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-11417699168102078942019-04-04T22:02:00.001-07:002019-04-04T22:07:49.434-07:00Which Planet is Closest to Earth on Average?I realized I didn't actually know which planet is closest to Earth. 
I'd guess Mercury since its orbital period is so small, but I looked at the data and summarized some results here.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-BJKRVLD-x34/XKbiND9BbCI/AAAAAAAAE4U/1T0wQLZB1E8LFsEnJa_jFOSNz9DfcN5AwCLcBGAs/s1600/planet%2Bdistances.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="815" data-original-width="1600" height="auto" src="https://3.bp.blogspot.com/-BJKRVLD-x34/XKbiND9BbCI/AAAAAAAAE4U/1T0wQLZB1E8LFsEnJa_jFOSNz9DfcN5AwCLcBGAs/s1600/planet%2Bdistances.png" width="0%" /></a></div><a name='more'></a><h4>Data source</h4><div>I used the Horizons data from NASA. <a href="https://ssd.jpl.nasa.gov/horizons.cgi">You can find it here</a>.</div><div><br /></div><div>I pulled 7500 days of data, and calculated the distance at each time between Earth and Mercury, Earth and Venus, and Earth and Mars.<br /><br /></div><h4>Results</h4><div>I made a plot summarizing the results:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-BJKRVLD-x34/XKbiND9BbCI/AAAAAAAAE4U/1T0wQLZB1E8LFsEnJa_jFOSNz9DfcN5AwCLcBGAs/s1600/planet%2Bdistances.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="815" data-original-width="1600" height="auto" src="https://3.bp.blogspot.com/-BJKRVLD-x34/XKbiND9BbCI/AAAAAAAAE4U/1T0wQLZB1E8LFsEnJa_jFOSNz9DfcN5AwCLcBGAs/s1600/planet%2Bdistances.png" width="95%" /></a></div><div><br /></div><div>The color bars at the bottom are the color of the closest planet at that time (blue for Mercury, yellow for Venus, and red for Mars).</div><div><br /></div><div>My hunch on Mercury appears correct, but it's close between Mercury and Venus. 
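The same ordering falls out of a crude back-of-the-envelope model (my own simplification, not the Horizons data used above): treat the orbits as circular and coplanar, and average the Earth-planet separation over a uniformly distributed relative phase angle:

```python
import math

# Mean orbital radii in AU (circular, coplanar simplification -- real orbits
# are elliptical and inclined, so treat these as rough numbers).
ORBIT_AU = {"Mercury": 0.387, "Venus": 0.723, "Earth": 1.000, "Mars": 1.524}

def mean_separation(a, b, samples=100_000):
    """Average distance between two circular orbits of radii a and b,
    assuming the relative phase angle is uniformly distributed."""
    total = 0.0
    for i in range(samples):
        theta = 2 * math.pi * i / samples
        # Law of cosines gives the separation at relative phase theta.
        total += math.sqrt(a * a + b * b - 2 * a * b * math.cos(theta))
    return total / samples

for name in ("Mercury", "Venus", "Mars"):
    print(name, round(mean_separation(ORBIT_AU["Earth"], ORBIT_AU[name]), 3))
```

Under this simplification Mercury averages about 1.04 AU from Earth, Venus about 1.14 AU, and Mars about 1.69 AU, matching the ordering in the plot.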
One of these three planets is always closest since the largest gap between Mercury and the Earth is smaller than the smallest gap between Earth and Jupiter.</div><div><br /></div><div>This might be a bit surprising to some because the distance from Venus to the Sun is closest to the distance from Earth to the Sun, and Venus does come closer to the Earth than any other planet. If you trace out orbits though, you'll see that this doesn't guarantee the smallest average distance because Venus can end up on the other side of the Sun for an extended period compared with Mercury.</div><div><br /></div><div>I'll try to put together a gif showing this at some point.<br /><br /><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-52602595285827728252019-03-31T11:00:00.000-07:002019-03-31T12:27:32.282-07:00What Was the (Statistically) Worst NFL MVP Season of All Time?In analyzing MVP winners, I found a few that seem off based on their stats that year.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-w0hECNaVKUc/XKETzb1TNMI/AAAAAAAAE3o/8DOxlKYKAu8-vktSDJ-4EYjz3FDqLIFPQCLcBGAs/s1600/summary%2Btable.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="832" data-original-width="996" height="auto" src="https://4.bp.blogspot.com/-w0hECNaVKUc/XKETzb1TNMI/AAAAAAAAE3o/8DOxlKYKAu8-vktSDJ-4EYjz3FDqLIFPQCLcBGAs/s1600/summary%2Btable.png" width="0%" /></a></div><a name='more'></a><h4>Statistical Outlier Score</h4><div>I looked at this a few different ways. The first was to <a href="http://www.somesolvedproblems.com/p/sports.html">use the score described here</a>. Using that, here are the winners for each year in which a QB or RB won the MVP and the league played at least 14 games.
The number in parentheses is the score described in the link referenced above.<br /><br /></div><table style="margin-left: auto; margin-right: auto;"><tbody><tr style="background-color: #e6a117;"><th>Year</th><th>Winner</th><th>Best QB</th><th>Best QB or RB</th></tr><tr><td>2018</td><td>Patrick Mahomes (1.64)</td><td>Patrick Mahomes (1.64)</td><td>Patrick Mahomes (1.64)</td></tr><tr><td>2017</td><td>Tom Brady (0.98)</td><td>Alex Smith (1.12)</td><td>Todd Gurley (1.70)</td></tr><tr><td>2016</td><td>Matt Ryan (1.74)</td><td>Matt Ryan (1.74)</td><td>Matt Ryan (1.74)</td></tr><tr><td>2015</td><td>Cam Newton (0.94)</td><td>Russell Wilson (1.10)</td><td>Russell Wilson (1.10)</td></tr><tr><td>2014</td><td>Aaron Rodgers (1.57)</td><td>Aaron Rodgers (1.57)</td><td>Aaron Rodgers (1.57)</td></tr><tr><td>2013</td><td>Peyton Manning (1.83)</td><td>Peyton Manning (1.83)</td><td>Peyton Manning (1.83)</td></tr><tr><td>2012</td><td>Adrian Peterson (2.02)</td><td>Aaron Rodgers (1.44)</td><td>Adrian Peterson (2.02)</td></tr><tr><td>2011</td><td>Aaron Rodgers (2.11)</td><td>Aaron Rodgers (2.11)</td><td>Aaron Rodgers (2.11)</td></tr><tr><td>2010</td><td>Tom Brady (1.40)</td><td>Tom Brady (1.40)</td><td>Arian Foster (1.53)</td></tr><tr><td>2009</td><td>Peyton Manning (0.69)</td><td>Drew Brees (1.27)</td><td>Chris Johnson (1.84)</td></tr><tr><td>2008</td><td>Peyton Manning (0.54)</td><td>Philip Rivers (1.35)</td><td>DeAngelo Williams (1.81)</td></tr><tr><td>2007</td><td>Tom Brady (2.08)</td><td>Tom Brady (2.08)</td><td>Tom Brady (2.08)</td></tr><tr><td>2006</td><td>LaDainian Tomlinson (2.16)</td><td>Peyton Manning (1.66)</td><td>LaDainian Tomlinson (2.16)</td></tr><tr><td>2005</td><td>Shaun Alexander (1.75)</td><td>Peyton Manning (1.20)</td><td>Shaun Alexander (1.75)</td></tr><tr><td>2004</td><td>Peyton Manning (2.02)</td><td>Peyton Manning (2.02)</td><td>Peyton Manning (2.02)</td></tr><tr><td>2003</td><td>Peyton Manning(1.22)<br />Steve McNair(1.36)</td><td>Steve McNair 
(1.36)</td><td>Steve McNair (1.36)</td></tr><tr><td>2002</td><td>Rich Gannon (1.10)</td><td>Rich Gannon (1.10)</td><td>Priest Holmes (2.14)</td></tr><tr><td>2001</td><td>Kurt Warner (1.35)</td><td>Kurt Warner (1.35)</td><td>Marshall Faulk (2.27)</td></tr><tr><td>2000</td><td>Marshall Faulk (2.31)</td><td>Jeff Garcia (1.20)</td><td>Marshall Faulk (2.31)</td></tr><tr><td>1999</td><td>Kurt Warner (1.89)</td><td>Kurt Warner (1.89)</td><td>Kurt Warner (1.89)</td></tr><tr><td>1998</td><td>Terrell Davis (2.34)</td><td>Randall Cunningham (1.38)</td><td>Terrell Davis (2.34)</td></tr><tr><td>1997</td><td>Brett Favre(1.12)<br />Barry Sanders(2.28)</td><td>Brett Favre (1.12)</td><td>Barry Sanders (2.28)</td></tr><tr><td>1996</td><td>Brett Favre (1.42)</td><td>Brett Favre (1.42)</td><td>Brett Favre (1.42)</td></tr><tr><td>1995</td><td>Brett Favre (1.29)</td><td>Brett Favre (1.29)</td><td>Emmitt Smith (2.29)</td></tr><tr><td>1994</td><td>Steve Young (2.25)</td><td>Steve Young (2.25)</td><td>Steve Young (2.25)</td></tr><tr><td>1993</td><td>Emmitt Smith (1.87)</td><td>Steve Young (1.51)</td><td>Emmitt Smith (1.87)</td></tr><tr><td>1992</td><td>Steve Young (2.05)</td><td>Steve Young (2.05)</td><td>Steve Young (2.05)</td></tr><tr><td>1991</td><td>Thurman Thomas (1.57)</td><td>Mark Rypien (1.20)</td><td>Barry Sanders (1.66)</td></tr><tr><td>1990</td><td>Joe Montana (0.69)</td><td>Warren Moon (1.33)</td><td>Barry Sanders (1.62)</td></tr><tr><td>1988</td><td>Boomer Esiason (1.44)</td><td>Boomer Esiason (1.44)</td><td>Boomer Esiason (1.44)</td></tr><tr><td>1985</td><td>Marcus Allen (1.12)</td><td>Boomer Esiason (1.30)</td><td>Boomer Esiason (1.30)</td></tr><tr><td>1984</td><td>Dan Marino (1.96)</td><td>Dan Marino (1.96)</td><td>Dan Marino (1.96)</td></tr><tr><td>1983</td><td>Joe Theismann (1.20)</td><td>Joe Theismann (1.20)</td><td>Eric Dickerson (1.28)</td></tr><tr><td>1981</td><td>Ken Anderson (1.37)</td><td>Ken Anderson (1.37)</td><td>Ken Anderson 
(1.37)</td></tr><tr><td>1980</td><td>Brian Sipe (1.15)</td><td>Brian Sipe (1.15)</td><td>Earl Campbell (2.12)</td></tr><tr><td>1979</td><td>Earl Campbell (1.22)</td><td>Roger Staubach (1.37)</td><td>Roger Staubach (1.37)</td></tr><tr><td>1978</td><td>Terry Bradshaw (1.18)</td><td>Roger Staubach (1.32)</td><td>Roger Staubach (1.32)</td></tr><tr><td>1977</td><td>Walter Payton (2.55)</td><td>Bob Griese (0.88)</td><td>Walter Payton (2.55)</td></tr><tr><td>1976</td><td>Bert Jones (1.63)</td><td>Bert Jones (1.63)</td><td>Bert Jones (1.63)</td></tr><tr><td>1975</td><td>Fran Tarkenton (1.05)</td><td>Fran Tarkenton (1.05)</td><td>O.J. Simpson (2.59)</td></tr><tr><td>1974</td><td>Ken Stabler (1.42)</td><td>Ken Stabler (1.42)</td><td>Otis Armstrong (1.65)</td></tr><tr><td>1973</td><td>O.J. Simpson (2.17)</td><td>Roger Staubach (1.21)</td><td>O.J. Simpson (2.17)</td></tr><tr><td>1970</td><td>John Brodie (1.47)</td><td>John Brodie (1.47)</td><td>John Brodie (1.47)</td></tr></tbody></table><div><br />Most years, the results look about right. There are a few that seem way off though. Digging into one of them, compare Rivers and Manning in 2008.<br /><div>2008:</div><div><ul><li>Peyton Manning: 250 yd/g, 1.7 td/g, 0.75 int/g, 7.2 yd/a, 95.0 rating</li><li>Philip Rivers: 251 yd/g, 2.1 td/g, 0.68 int/g, 8.4 yd/a, 105.5 rating</li></ul><div>Rivers outperformed him in every statistical category. Rivers' team also went further in the postseason than Manning's did that season and did so by beating Manning's team.</div></div><div><br /></div><h4>Regression Analysis</h4><div>Another way I looked at this was to build a model to predict winners and see which winner was least likely to win. The model is logistic regression using the following factors:<br /><ul><li>score: outlier score defined above</li><li>QB: 1 if you're a QB and 0 if you're an RB</li></ul><div>That's it. I used all years since 1970.
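A two-factor model like that can be sketched with plain gradient descent (the rows below are made-up illustrative numbers, not the actual MVP data):

```python
import math

def fit_logistic(rows, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression on two features.
    rows: list of ([score, is_qb], won) pairs -- the two factors above."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(steps):
        for x, y in rows:
            p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            err = p - y
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

def win_probability(x, w, b):
    """Model's probability that a player with features x wins the MVP."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

# Hypothetical rows: (outlier score, QB flag) -> won the MVP that year.
rows = [([2.1, 1], 1), ([1.4, 1], 0), ([0.9, 1], 0),
        ([2.0, 0], 1), ([1.1, 0], 0), ([0.7, 0], 0)]
w, b = fit_logistic(rows)
```

After fitting, high-scoring candidates get high win probabilities and low scorers get low ones, which is all the real model needs to flag unlikely winners.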
I dropped all years with any of the following:<br /><ul><li>multiple MVP winners</li><li>fewer than 14 games</li><li>a non-RB or QB winner</li></ul>The pseudo-r^2 is 0.50.<br /><br />For a given season, the total probability of winning has to be 1 for all players, so I just divided each player's score in each season by the sum of scores in that season to normalize it. As an extreme example, if two players have an individual score of 90% and the rest of the league has a combined score of 0%, those two players effectively each had a 50% chance of winning in that season.<br /><br />Using this to generate % chance of winning for each season, I get the following as the 5 most obvious MVP winners and 5 'luckiest' MVP winners (won with the lowest % chance of winning). I've also added 'unluckiest', which is the list of players with the highest chance that ended up not winning that season.<br /><br />Obvious:</div></div><div><ol><li>1976, Bert Jones, 85% chance</li><li>1977, Walter Payton, 83% chance</li><li>1992, Steve Young, 78% chance</li><li>2007, Tom Brady, 78% chance</li><li>2016, Matt Ryan, 75% chance</li></ol><div>Luckiest:</div></div><div><ol><li>1985, Marcus Allen, 1.4% chance</li><li>2008, Peyton Manning, 1.8% chance</li><li>2009, Peyton Manning, 2.3% chance</li><li>1990, Joe Montana, 3.0% chance</li><li>1979, Earl Campbell, 3.1% chance</li></ol><div>Unluckiest:</div></div><div><ol><li>1975, O.J. Simpson, 77% chance</li><li>1979, Roger Staubach, 64% chance</li><li>1995, Emmitt Smith, 58% chance</li><li>2002, Priest Holmes, 55% chance</li><li>1993, Steve Young, 54% chance</li></ol><div>This model is extremely crude, but it works pretty well to spot outliers. I previously walked through Manning's 2008 season. Manning's 2009 was similar in that Brees beat him soundly statistically and their teams performed similarly. Montana's 1990 was interesting in that Moon outperformed him statistically but was on a much worse team.
1985 and 1979 were interesting in that an RB won the MVP with a worse overall statistical performance than the best QB. At least in recent history, QBs have had an advantage in MVP voting. This might be due to mixing time periods: 40 years ago, RBs were more valued, so by mixing old and new data I likely made the model inaccurate for really old or really new data.<br /><br /><h4>Conclusion</h4></div></div></div><div>Looking at only the statistical data and removing everything you pick up from watching the games (which is important but can't be captured here), I think Peyton Manning's 2008 season was the least deserving MVP season in the last 30 years. Statistically, Philip Rivers, DeAngelo Williams, Drew Brees, Aaron Rodgers, and Kurt Warner all had more impressive 2008 seasons. Manning's statistical score of 0.54 means that he was only half a standard deviation above the average starting QB that season, which is the lowest ever for an MVP winner.<br /><br /><br /><br /><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-23396410692656010612019-03-16T21:25:00.000-07:002019-03-16T21:25:53.013-07:00Simple Algorithm for Classifying CurvesI needed to classify curves recently, and the tutorials I found for classification algorithms all used single points instead of curves.
I settled on something crude but effective for my use case.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-gbkCTIck6Ik/XI3Fb1HsL5I/AAAAAAAAE2I/x53SF4EufyIuNJQNeyvRvnagav9AlPfwwCLcBGAs/s1600/default%2Bplot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="712" data-original-width="1427" height="auto" src="https://3.bp.blogspot.com/-gbkCTIck6Ik/XI3Fb1HsL5I/AAAAAAAAE2I/x53SF4EufyIuNJQNeyvRvnagav9AlPfwwCLcBGAs/s1600/default%2Bplot.png" width="0%" /></a></div><a name='more'></a><h4>Assumptions about the data</h4><div><ul><li>All data are scaled the same way...they all overlap, and no normalization and/or rotations are required</li><li>All data have the same x values</li></ul><div>The specific use case was categorizing results from repeated tests on the same family of hardware. Each test run uses exactly the same parameters, but the results vary from noise, operator mistakes, etc.<br /><br /></div><h4>Basic algorithm</h4></div><div><ul><li>Make a training set of curves</li><li>Classify each one in the set</li><li>Get the average value of the good curves at each x value</li><li>Take curve 1 from your training set</li><li>At each x, get the distance from curve 1 to the average of the good curves</li><li>Repeat for all curves, and store those distances by x value</li><li>Take curve 1 from your test set</li><li>At each x, get the distance from curve 1 to the average of the good curves</li><li>Compare this with the distance at that x value from each curve in the training set</li><li>Sum the errors across all x values for each curve in the training set</li><li>The classification of the training curves with the smallest error here is the classification of the test curve, provided the error is under some threshold</li></ul><div>This is very similar to just comparing a test curve with each curve in the training set and finding the one with the smallest error.
The only difference is that by comparing errors off of average, you get the same result for +1 noise and -1 noise.<br /><br />This also allows for finer tuning of the algorithm. An obvious thing that you might want to do is say 'all points within x distance from the good average are ok', so you can set an offset above and below the good average and take distance from there.<br /><br /></div><h4>Simple example</h4></div><div>To show this in action, I made a training set of 100 roughly sinusoidal curves. You can play with the code here:<br /><br /><a href="https://colab.research.google.com/drive/1WDbzjyYgVDNcxPj_UEPliD6G4NVyO4pX">https://colab.research.google.com/drive/1WDbzjyYgVDNcxPj_UEPliD6G4NVyO4pX</a><br /><br />It has 25 curves per type and 4 different types:</div><div><ul><li>good/category 0: sin(x/10) + gaussian noise</li><li>bad/category 1: sin(x/10) + sin(x/8) + gaussian noise</li><li>bad/category 2: round(sin(x/10)) + gaussian noise</li><li>bad/category 3: 0.25*sin(x/10) + gaussian noise</li></ul><div>A sample training data set looks like:</div></div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-gbkCTIck6Ik/XI3Fb1HsL5I/AAAAAAAAE2I/x53SF4EufyIuNJQNeyvRvnagav9AlPfwwCLcBGAs/s1600/default%2Bplot.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="712" data-original-width="1427" height="auto" src="https://3.bp.blogspot.com/-gbkCTIck6Ik/XI3Fb1HsL5I/AAAAAAAAE2I/x53SF4EufyIuNJQNeyvRvnagav9AlPfwwCLcBGAs/s1600/default%2Bplot.png" width="80%" /></a></div><div><br />From there, I just followed the algorithm above. To determine a match, I used the closest 12 training curves. If more than 6 of the matches are of the same category, that's the apparent category of the test curve. The % of matches from that category is the confidence.<br /><br /></div><h4>Speeding it up</h4><div>This is a slow algorithm.
For L training curves, M test curves, and N points per curve, the algorithm is O(LMN). That blows up quickly. There are many ways to make this faster. The ones most likely to apply for the specific type of data I was working with are:<br /><ul><li>Exclude some ranges. For example, the signal that matters might be from 5,000 < x < 15,000, but data were collected from 0 to 50,000. Restricting the algorithm to 5,000 < x < 15,000 would reduce N and speed things up.</li><li>Downsample the data. Simply take every 2nd, 5th, 10th, or whatever point from each curve to compare.</li><li>Reduce the size of the training set. Do 20 curves per category for example.</li><li>Stop early. If you know that a 'good' curve typically has a total error of < 50 and your test curve error is 22, you probably don't need to continue.</li><li>Look for max errors. If the test curve's total error is 50, and two of the 1,000 x values gave an error of 48 combined, quickly scan to see if a category typically fails at those 2 x values. Consider also just testing x values that identify a category. If category 1 always has large errors at x = 5 and x = 37 and nowhere else, there's not much point in comparing the test curve with category 1 training curves at any x values other than 5 and 37.</li></ul><div><br /></div><h4>Conclusion</h4><div>So this was really crude but it worked pretty well for my use case and I hadn't seen it laid out anywhere else. 
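For reference, the core matching step described above can be sketched in plain Python (my own minimal sketch, not the Colab code linked earlier; each curve is assumed to be a list of y values sampled at the same x values):

```python
def fit_reference(train_curves, train_labels, good_label=0):
    """Average the good training curves, then store each training curve's
    per-x distance from that average."""
    n = len(train_curves[0])
    good = [c for c, lab in zip(train_curves, train_labels) if lab == good_label]
    good_avg = [sum(c[i] for c in good) / len(good) for i in range(n)]
    deviations = [[abs(c[i] - good_avg[i]) for i in range(n)] for c in train_curves]
    return good_avg, deviations

def classify(test_curve, good_avg, deviations, train_labels, k=12):
    """Majority vote among the k training curves whose deviation profile is
    closest to the test curve's; returns (category, confidence)."""
    test_dev = [abs(v, ) if False else abs(v - a) for v, a in zip(test_curve, good_avg)]
    errors = [sum(abs(d - t) for d, t in zip(dev, test_dev)) for dev in deviations]
    nearest = sorted(range(len(errors)), key=errors.__getitem__)[:k]
    votes = {}
    for i in nearest:
        votes[train_labels[i]] = votes.get(train_labels[i], 0) + 1
    label = max(votes, key=votes.get)
    return label, votes[label] / k
```

Thresholding, offsets around the good average, and the speed-ups listed above all slot into the `errors` computation.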
Hopefully this helps someone.</div><div><br /></div><div><br /></div><div><br /></div></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com2tag:blogger.com,1999:blog-1532419805701836386.post-46411792763675525072019-03-09T22:52:00.003-08:002019-03-17T20:36:42.130-07:00Which Current Quarterbacks Will Make the Hall of Fame?I took a stab at predicting which current and recent quarterbacks will make the Pro Football Hall of Fame.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-GuQokhBDUdc/XIWm6JviK5I/AAAAAAAAE1o/DshkIzWyGtoQ5CdTaOIVJ0-e2tDrss8bACLcBGAs/s1600/table%2Bimage.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="778" data-original-width="789" height="auto" src="https://3.bp.blogspot.com/-GuQokhBDUdc/XIWm6JviK5I/AAAAAAAAE1o/DshkIzWyGtoQ5CdTaOIVJ0-e2tDrss8bACLcBGAs/s900/table%2Bimage.png" width="0%" /></a></div><a name='more'></a>I've listed the top quarterbacks according to my scoring system and color-coded by Hall of Fame membership status.<br /><br /><div style="display: grid; grid-template-columns: 1fr 3fr; margin-left: auto; margin-right: auto; width: 325px;"><div style="background-color: #bbeebb; grid-columns: 1/2; height: 25px; width: 50px;"></div><div>= Already in </div><div style="background-color: #bbbbee; grid-columns: 1/2; height: 25px; width: 50px;"></div><div>= Not yet eligible </div><div style="background-color: #eeeeee; grid-columns: 1/2; height: 25px; width: 50px;"></div><div>= Eligible for less than 5 years </div><div style="background-color: #eebbbb; grid-columns: 1/2; height: 25px; width: 50px;"></div><div>= Eligible for more than 5 years and not in </div></div><br /><div><table style="margin-left: auto; margin-right: auto;"><tbody><tr><th>Name</th><th>Stats</th><th>Success</th><th>Awards</th><th>Consistency</th><th>Total</th></tr><tr style="background-color: #bbbbee;"><td>Tom 
Brady</td><td>1.34</td><td>0.50</td><td>0.16</td><td>0.21</td><td>2.22</td></tr><tr style="background-color: #bbbbee;"><td>Peyton Manning</td><td>1.45</td><td>0.24</td><td>0.25</td><td>0.26</td><td>2.19</td></tr><tr style="background-color: #bbeebb;"><td>Steve Young</td><td>1.53</td><td>0.25</td><td>0.18</td><td>0.18</td><td>2.15</td></tr><tr style="background-color: #bbeebb;"><td>Joe Montana</td><td>1.12</td><td>0.42</td><td>0.16</td><td>0.19</td><td>1.89</td></tr><tr style="background-color: #bbbbee;"><td>Aaron Rodgers</td><td>1.28</td><td>0.26</td><td>0.12</td><td>0.17</td><td>1.82</td></tr><tr style="background-color: #bbeebb;"><td>Roger Staubach</td><td>1.06</td><td>0.39</td><td>0.10</td><td>0.14</td><td>1.70</td></tr><tr style="background-color: #bbeebb;"><td>Brett Favre</td><td>1.03</td><td>0.20</td><td>0.18</td><td>0.17</td><td>1.57</td></tr><tr style="background-color: #bbbbee;"><td>Drew Brees</td><td>1.08</td><td>0.13</td><td>0.14</td><td>0.21</td><td>1.57</td></tr><tr style="background-color: #bbeebb;"><td>Dan Marino</td><td>1.04</td><td>0.14</td><td>0.18</td><td>0.20</td><td>1.56</td></tr><tr style="background-color: #bbeebb;"><td>Dan Fouts</td><td>1.00</td><td>0.07</td><td>0.13</td><td>0.14</td><td>1.34</td></tr><tr style="background-color: #eebbbb;"><td>Ken Anderson</td><td>0.93</td><td>0.04</td><td>0.10</td><td>0.12</td><td>1.20</td></tr><tr style="background-color: #bbeebb;"><td>Kurt Warner</td><td>0.70</td><td>0.32</td><td>0.09</td><td>0.09</td><td>1.19</td></tr><tr style="background-color: #bbbbee;"><td>Ben Roethlisberger</td><td>0.72</td><td>0.25</td><td>0.04</td><td>0.12</td><td>1.13</td></tr><tr style="background-color: #bbeebb;"><td>John Elway</td><td>0.65</td><td>0.25</td><td>0.09</td><td>0.12</td><td>1.11</td></tr><tr style="background-color: #bbeebb;"><td>Terry Bradshaw</td><td>0.65</td><td>0.29</td><td>0.05</td><td>0.11</td><td>1.09</td></tr><tr style="background-color: #bbbbee;"><td>Philip 
Rivers</td><td>0.82</td><td>0.11</td><td>0.05</td><td>0.11</td><td>1.09</td></tr><tr style="background-color: #bbeebb;"><td>Fran Tarkenton</td><td>0.70</td><td>0.19</td><td>0.06</td><td>0.11</td><td>1.06</td></tr><tr style="background-color: #bbbbee;"><td>Russell Wilson</td><td>0.52</td><td>0.33</td><td>0.02</td><td>0.09</td><td>0.96</td></tr><tr style="background-color: #eeeeee;"><td>Donovan McNabb</td><td>0.61</td><td>0.23</td><td>0.04</td><td>0.07</td><td>0.95</td></tr><tr style="background-color: #bbeebb;"><td>Jim Kelly</td><td>0.56</td><td>0.23</td><td>0.08</td><td>0.07</td><td>0.94</td></tr><tr style="background-color: #bbeebb;"><td>Warren Moon</td><td>0.69</td><td>0.06</td><td>0.06</td><td>0.12</td><td>0.93</td></tr><tr style="background-color: #bbeebb;"><td>Bob Griese</td><td>0.56</td><td>0.16</td><td>0.10</td><td>0.09</td><td>0.90</td></tr><tr style="background-color: #bbeebb;"><td>Ken Stabler</td><td>0.51</td><td>0.20</td><td>0.10</td><td>0.07</td><td>0.88</td></tr><tr style="background-color: #eebbbb;"><td>Boomer Esiason</td><td>0.65</td><td>0.08</td><td>0.06</td><td>0.09</td><td>0.87</td></tr><tr style="background-color: #bbbbee;"><td>Tony Romo</td><td>0.61</td><td>0.07</td><td>0.04</td><td>0.10</td><td>0.82</td></tr><tr style="background-color: #bbbbee;"><td>Matt Ryan</td><td>0.58</td><td>0.10</td><td>0.05</td><td>0.07</td><td>0.80</td></tr><tr style="background-color: #eebbbb;"><td>Randall Cunningham</td><td>0.56</td><td>0.08</td><td>0.07</td><td>0.08</td><td>0.80</td></tr><tr style="background-color: #bbeebb;"><td>Troy Aikman</td><td>0.31</td><td>0.26</td><td>0.04</td><td>0.06</td><td>0.67</td></tr><tr style="background-color: #eebbbb;"><td>Rich Gannon</td><td>0.37</td><td>0.14</td><td>0.09</td><td>0.07</td><td>0.67</td></tr><tr style="background-color: #eebbbb;"><td>Joe Theismann</td><td>0.30</td><td>0.21</td><td>0.04</td><td>0.07</td><td>0.63</td></tr><tr style="background-color: #eebbbb;"><td>Steve 
McNair</td><td>0.29</td><td>0.16</td><td>0.04</td><td>0.05</td><td>0.54</td></tr><tr style="background-color: #eebbbb;"><td>Craig Morton</td><td>0.30</td><td>0.16</td><td>0.00</td><td>0.06</td><td>0.52</td></tr><tr style="background-color: #eebbbb;"><td>Mark Brunell</td><td>0.29</td><td>0.13</td><td>0.02</td><td>0.06</td><td>0.50</td></tr><tr style="background-color: #eebbbb;"><td>Jeff Garcia</td><td>0.38</td><td>0.04</td><td>0.03</td><td>0.05</td><td>0.49</td></tr><tr style="background-color: #eebbbb;"><td>Jim Plunkett</td><td>0.19</td><td>0.25</td><td>0.00</td><td>0.02</td><td>0.47</td></tr><tr style="background-color: #eebbbb;"><td>Jim Everett</td><td>0.32</td><td>0.06</td><td>0.01</td><td>0.07</td><td>0.45</td></tr><tr style="background-color: #eebbbb;"><td>Jim Hart</td><td>0.30</td><td>0.00</td><td>0.07</td><td>0.05</td><td>0.42</td></tr><tr style="background-color: #eebbbb;"><td>Vinny Testaverde</td><td>0.29</td><td>0.04</td><td>0.01</td><td>0.06</td><td>0.40</td></tr><tr style="background-color: #eebbbb;"><td>Phil Simms</td><td>0.20</td><td>0.16</td><td>0.01</td><td>0.03</td><td>0.40</td></tr><tr style="background-color: #eebbbb;"><td>Neil Lomax</td><td>0.32</td><td>0.00</td><td>0.01</td><td>0.06</td><td>0.40</td></tr><tr style="background-color: #eebbbb;"><td>Billy Kilmer</td><td>0.22</td><td>0.08</td><td>0.04</td><td>0.06</td><td>0.39</td></tr><tr style="background-color: #bbbbee;"><td>Carson Palmer</td><td>0.27</td><td>0.03</td><td>0.04</td><td>0.06</td><td>0.39</td></tr><tr style="background-color: #eeeeee;"><td>Matt Hasselbeck</td><td>0.13</td><td>0.14</td><td>0.02</td><td>0.04</td><td>0.34</td></tr><tr style="background-color: #eebbbb;"><td>Steve Bartkowski</td><td>0.21</td><td>0.04</td><td>0.01</td><td>0.04</td><td>0.31</td></tr><tr style="background-color: #eebbbb;"><td>Drew Bledsoe</td><td>0.15</td><td>0.07</td><td>0.03</td><td>0.04</td><td>0.29</td></tr><tr style="background-color: #bbbbee;"><td>Eli 
Manning</td><td>0.09</td><td>0.16</td><td>0.01</td><td>0.01</td><td>0.27</td></tr><tr style="background-color: #eebbbb;"><td>Jeff George</td><td>0.18</td><td>0.04</td><td>0.00</td><td>0.05</td><td>0.26</td></tr><tr style="background-color: #eebbbb;"><td>Ron Jaworski</td><td>0.13</td><td>0.09</td><td>0.01</td><td>0.02</td><td>0.24</td></tr><tr style="background-color: #eebbbb;"><td>Dave Krieg</td><td>0.14</td><td>0.03</td><td>0.02</td><td>0.03</td><td>0.22</td></tr><tr style="background-color: #eebbbb;"><td>Steve Grogan</td><td>0.18</td><td>0.00</td><td>0.00</td><td>0.04</td><td>0.21</td></tr><tr style="background-color: #bbbbee;"><td>Cam Newton</td><td>0.01</td><td>0.11</td><td>0.04</td><td>0.03</td><td>0.18</td></tr><tr style="background-color: #bbbbee;"><td>Matthew Stafford</td><td>0.12</td><td>0.00</td><td>0.00</td><td>0.04</td><td>0.16</td></tr><tr style="background-color: #eebbbb;"><td>Brad Johnson</td><td>-0.08</td><td>0.14</td><td>0.01</td><td>0.04</td><td>0.12</td></tr><tr style="background-color: #eebbbb;"><td>Ken O'Brien</td><td>0.05</td><td>0.00</td><td>0.01</td><td>0.03</td><td>0.09</td></tr><tr style="background-color: #eebbbb;"><td>Steve DeBerg</td><td>0.04</td><td>0.03</td><td>0.00</td><td>0.02</td><td>0.09</td></tr><tr style="background-color: #eebbbb;"><td>Chris Chandler</td><td>-0.03</td><td>0.06</td><td>0.01</td><td>0.04</td><td>0.09</td></tr><tr style="background-color: #bbbbee;"><td>Alex Smith</td><td>-0.01</td><td>0.05</td><td>0.01</td><td>0.02</td><td>0.07</td></tr><tr style="background-color: #eebbbb;"><td>Archie Manning</td><td>0.01</td><td>0.00</td><td>0.01</td><td>0.03</td><td>0.05</td></tr><tr style="background-color: #eebbbb;"><td>Joe Ferguson</td><td>-0.01</td><td>0.03</td><td>0.00</td><td>0.02</td><td>0.04</td></tr><tr style="background-color: #eebbbb;"><td>Jake Plummer</td><td>-0.08</td><td>0.06</td><td>0.01</td><td>0.03</td><td>0.01</td></tr><tr style="background-color: #bbbbee;"><td>Joe 
Flacco</td><td>-0.29</td><td>0.26</td><td>0.00</td><td>0.00</td><td>-0.03</td></tr><tr style="background-color: #eebbbb;"><td>Lynn Dickey</td><td>-0.11</td><td>0.04</td><td>0.00</td><td>0.03</td><td>-0.04</td></tr><tr style="background-color: #eebbbb;"><td>Kerry Collins</td><td>-0.16</td><td>0.08</td><td>0.01</td><td>0.01</td><td>-0.06</td></tr><tr style="background-color: #eebbbb;"><td>Jim Zorn</td><td>-0.11</td><td>0.00</td><td>0.00</td><td>0.03</td><td>-0.08</td></tr><tr style="background-color: #eebbbb;"><td>Jeff Blake</td><td>-0.14</td><td>0.00</td><td>0.01</td><td>0.02</td><td>-0.11</td></tr><tr style="background-color: #eebbbb;"><td>Richard Todd</td><td>-0.22</td><td>0.08</td><td>0.00</td><td>0.02</td><td>-0.12</td></tr><tr style="background-color: #bbbbee;"><td>Andy Dalton</td><td>-0.17</td><td>0.00</td><td>0.01</td><td>0.02</td><td>-0.14</td></tr><tr style="background-color: #eebbbb;"><td>Jim Harbaugh</td><td>-0.27</td><td>0.06</td><td>0.01</td><td>0.02</td><td>-0.19</td></tr><tr style="background-color: #bbbbee;"><td>Jay Cutler</td><td>-0.27</td><td>0.03</td><td>0.01</td><td>0.00</td><td>-0.24</td></tr><tr style="background-color: #eebbbb;"><td>Dan Pastorini</td><td>-0.39</td><td>0.11</td><td>0.01</td><td>0.00</td><td>-0.27</td></tr><tr style="background-color: #eebbbb;"><td>Jon Kitna</td><td>-0.43</td><td>0.00</td><td>0.00</td><td>0.01</td><td>-0.43</td></tr></tbody></table></div><div><br />They map to each other quite well.<br /><br />Using these scores, I did some basic logistic regression to make predictions. I used all players that retired before 2010 as the training set and assigned a score of 1 for 'in' and 0 for 'not in'. I used all other players as the test data. 
The pseudo-r^2 is ~0.8.<br /><br />To make predictions now, I'll use tiers:</div><div><ul><li>lock - these guys will almost certainly get in</li><li>maybe - decent shot but no guarantee they get in</li><li>no - seems really unlikely they'll get in</li></ul><h4>Lock</h4></div><div>Scores above 1.2 define this tier. All QBs in the past that were in this range easily made it in. The active and recently retired QBs in this range are:</div><div><ul><li>Tom Brady</li><li>Peyton Manning</li><li>Aaron Rodgers</li><li>Drew Brees</li></ul><div>I would be surprised if those 4 don't make it in at some point barring a massive controversy. Based on the regression results, Drew Brees has the lowest odds of getting in from this category, but he should still be a lock eventually.<br /><br /></div></div><h4>Maybe</h4><div>Scores above 0.85 define this tier. Most QBs in this tier make it, but it's not a guarantee. Ken Anderson and Boomer Esiason are here and have been eligible for a while now but are not in. Their lack of post-season success is the primary reason.<br /><br />In all, there are 10 QBs in this tier that have been eligible for many years, and 8 of those 10 are in. The active and recently retired QBs in this range are:</div><div><ul><li>Ben Roethlisberger</li><li>Philip Rivers</li><li>Donovan McNabb</li><li>Russell Wilson</li></ul><div>I personally feel like Russell Wilson and Ben Roethlisberger will get in barring a complete career collapse, but Rivers and McNabb are iffier. Neither has a Super Bowl win, and neither dominated their peers statistically like Marino and Fouts. I would be surprised if both McNabb and Rivers get in. Based on the regression, Russell Wilson is in the best position, followed closely by Roethlisberger; McNabb is in good shape, and Rivers is very iffy.<br /><br /></div></div><h4>No</h4><div>Scores at or below 0.85 define this tier.
Only one QB from this tier has made the Hall of Fame: Troy Aikman, with his 3 Super Bowl wins. There are a huge number of recent and active QBs in this tier, so I won't list them all. Instead, I will list the ones that I think actually have a chance at making it in:</div><div><ul><li><b>Matt Ryan:</b> still playing at a very high level, Falcons are a good team, had one of the best seasons of all time (2016), and a Super Bowl win is likely enough to get him in</li><li><b>Eli Manning: </b>played well enough throughout most of his career, has two Super Bowl MVPs, and stopped the Patriots' perfect season...the drama/underdog thing around that was really impactful</li><li><b>Cam Newton: </b>will likely end up as the most successful running QB in history, and has a somewhat unique style that makes him stand out</li></ul><div>Based on the regression results, Matt Ryan, Joe Flacco, and Eli Manning (in that order) have the best shot from this group. I just can't see Flacco getting in with such abysmal numbers compared with his peers. He never even made the Pro Bowl.</div><div><br /></div><h4>Final interesting note</h4><div>Based on the regression score, Warren Moon is the biggest shocker in the Hall of Fame. He is the only Hall of Famer here ranked below players that didn't get in: Rich Gannon, Joe Theismann, Ken Anderson, and Randall Cunningham all scored higher. However, much of his prime was in the CFL and is not accounted for here, so that may have been a big factor.</div><div><br /></div><div><br /></div></div><div><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-7785336049011746632019-02-27T21:32:00.000-08:002019-04-02T22:11:51.154-07:00Steve Young Might Have Been the Best Quarterback of All TimeWho was the best quarterback of all time?
Here is my attempt at answering that question.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-HyDx-hhJFlE/XHduPDBjYuI/AAAAAAAAEz8/RD-sgnM_f8cBr1HmXVKl0NdSognPar2jgCLcBGAs/s1600/Brady%2Bcareer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1084" height="auto" src="https://3.bp.blogspot.com/-HyDx-hhJFlE/XHduPDBjYuI/AAAAAAAAEz8/RD-sgnM_f8cBr1HmXVKl0NdSognPar2jgCLcBGAs/s1600/Brady%2Bcareer.png" width="0%" /></a></div><a name='more'></a><h4>Best Career Rankings</h4><div>I will detail the methodology below, but here are the rankings just to go ahead and get that out of the way...<br /><br /></div><div><div><table style="margin-left: auto; margin-right: auto;"><tbody><tr style="background-color: #e6a117;"><th>Name</th><th>Stats</th><th>Success</th><th>Awards</th><th>Consistency</th><th>Total</th></tr><tr><td>Tom Brady</td><td>1.34</td><td>0.50</td><td>0.16</td><td>0.21</td><td>2.22</td></tr><tr><td>Peyton Manning</td><td>1.45</td><td>0.24</td><td>0.25</td><td>0.26</td><td>2.19</td></tr><tr><td>Steve Young</td><td>1.53</td><td>0.25</td><td>0.18</td><td>0.18</td><td>2.15</td></tr><tr><td>Joe Montana</td><td>1.12</td><td>0.42</td><td>0.16</td><td>0.19</td><td>1.89</td></tr><tr><td>Aaron Rodgers</td><td>1.28</td><td>0.26</td><td>0.12</td><td>0.17</td><td>1.82</td></tr><tr><td>Roger Staubach</td><td>1.06</td><td>0.39</td><td>0.10</td><td>0.14</td><td>1.70</td></tr><tr><td>Brett Favre</td><td>1.03</td><td>0.20</td><td>0.18</td><td>0.17</td><td>1.57</td></tr><tr><td>Drew Brees</td><td>1.08</td><td>0.13</td><td>0.14</td><td>0.21</td><td>1.57</td></tr><tr><td>Dan Marino</td><td>1.04</td><td>0.14</td><td>0.18</td><td>0.20</td><td>1.56</td></tr><tr><td>Dan Fouts</td><td>1.00</td><td>0.07</td><td>0.13</td><td>0.14</td><td>1.34</td></tr><tr><td>Ken Anderson</td><td>0.93</td><td>0.04</td><td>0.10</td><td>0.12</td><td>1.20</td></tr><tr><td>Kurt 
Warner</td><td>0.70</td><td>0.32</td><td>0.09</td><td>0.09</td><td>1.19</td></tr><tr><td>Ben Roethlisberger</td><td>0.72</td><td>0.25</td><td>0.04</td><td>0.12</td><td>1.13</td></tr><tr><td>John Elway</td><td>0.65</td><td>0.25</td><td>0.09</td><td>0.12</td><td>1.11</td></tr><tr><td>Terry Bradshaw</td><td>0.65</td><td>0.29</td><td>0.05</td><td>0.11</td><td>1.09</td></tr><tr><td>Philip Rivers</td><td>0.82</td><td>0.11</td><td>0.05</td><td>0.11</td><td>1.09</td></tr><tr><td>Fran Tarkenton</td><td>0.70</td><td>0.19</td><td>0.06</td><td>0.11</td><td>1.06</td></tr><tr><td>Russell Wilson</td><td>0.52</td><td>0.33</td><td>0.02</td><td>0.09</td><td>0.96</td></tr><tr><td>Donovan McNabb</td><td>0.61</td><td>0.23</td><td>0.04</td><td>0.07</td><td>0.95</td></tr><tr><td>Jim Kelly</td><td>0.56</td><td>0.23</td><td>0.08</td><td>0.07</td><td>0.94</td></tr><tr><td>Warren Moon</td><td>0.69</td><td>0.06</td><td>0.06</td><td>0.12</td><td>0.93</td></tr><tr><td>Bob Griese</td><td>0.56</td><td>0.16</td><td>0.10</td><td>0.09</td><td>0.90</td></tr><tr><td>Ken Stabler</td><td>0.51</td><td>0.20</td><td>0.10</td><td>0.07</td><td>0.88</td></tr><tr><td>Boomer Esiason</td><td>0.65</td><td>0.08</td><td>0.06</td><td>0.09</td><td>0.87</td></tr><tr><td>Tony Romo</td><td>0.61</td><td>0.07</td><td>0.04</td><td>0.10</td><td>0.82</td></tr><tr><td>Matt Ryan</td><td>0.58</td><td>0.10</td><td>0.05</td><td>0.07</td><td>0.80</td></tr><tr><td>Randall Cunningham</td><td>0.56</td><td>0.08</td><td>0.07</td><td>0.08</td><td>0.80</td></tr><tr><td>Troy Aikman</td><td>0.31</td><td>0.26</td><td>0.04</td><td>0.06</td><td>0.67</td></tr><tr><td>Rich Gannon</td><td>0.37</td><td>0.14</td><td>0.09</td><td>0.07</td><td>0.67</td></tr><tr><td>Joe Theismann</td><td>0.30</td><td>0.21</td><td>0.04</td><td>0.07</td><td>0.63</td></tr></tbody></table></div></div><br />Tom Brady just edges out first place. 
There are two major caveats worth calling out, though, that maybe push Manning and Young above him.<br /><ol><li>Peyton Manning went to Super Bowls with 4 different coaches while Brady has gone to all of his with the same, legendary coach. Peyton's consistency regardless of the team, coach, or system has to count for something, but I can't think of a good way to include this. It's a general issue in blind scoring like this...how do you account for the fact that Montana and Young had Jerry Rice? How do you account for Brady playing for maybe the best coach ever?<br /><br />Manning has also won the most MVPs, so if you were to weight the 'how good did people think he was?' scoring higher, he would take 1st place.</li><br /><li>Steve Young lost much of his career. He was stuck behind Montana for part of it, he played in the USFL for part of it, and he was severely injured many times. While he played, he had 6-7 years where he was head and shoulders above everyone else, including greats like John Elway, Dan Marino, Brett Favre, Jim Kelly, Warren Moon, Rich Gannon, Boomer Esiason, and Troy Aikman. As you'll see below, the scores do try to account for this, but they can't fully eliminate this factor.</li></ol><div>All 3 of them got to play with legendary receivers and/or tight ends. It's hard to account for that. In order, that probably helped Young most (Rice is the best of all time) and Brady least (Gronkowski and Moss are top-5 of all time at their positions).</div><div><br /></div><div>Looking at the list, no current players seem likely to crack the top-3. Aaron Rodgers at #5 is an obvious favorite, but he'd need to both play better going forward and win more Super Bowls. He's already 35, and his performance is trending down. Patrick Mahomes had an incredible 2018, and doing that every year for a decade would do it, but it's way too early to know if he will. 
Other greats from this generation like Philip Rivers and Drew Brees just aren't quite there from these metrics.</div><div><br /></div><h4>Best seasons</h4><div>Focusing on just the stats score, here are the best QB seasons of all time:<br /><br /></div><div><table style="margin-left: auto; margin-right: auto;"><tbody><tr style="background-color: #e6a117;"><th>Player</th><th>Year</th><th>Stat Score</th></tr><tr><td>Steve Young</td><td>1994</td><td>2.25</td></tr><tr><td>Aaron Rodgers</td><td>2011</td><td>2.11</td></tr><tr><td>Tom Brady</td><td>2007</td><td>2.08</td></tr><tr><td>Steve Young</td><td>1992</td><td>2.05</td></tr><tr><td>Peyton Manning</td><td>2004</td><td>2.02</td></tr><tr><td>Dan Marino</td><td>1984</td><td>1.96</td></tr><tr><td>Joe Montana</td><td>1989</td><td>1.90</td></tr><tr><td>Kurt Warner</td><td>1999</td><td>1.89</td></tr><tr><td>Peyton Manning</td><td>2013</td><td>1.83</td></tr><tr><td>Matt Ryan</td><td>2016</td><td>1.74</td></tr></tbody></table></div><div><br />Of the top-20 seasons of all time, more than half are from Young (4), Manning (3), Brady (2), and Rodgers (3). For reference, Patrick Mahomes' 2018 season was the 14th best of all time with a score of 1.64.</div><div><br /><b>Why claim Young might have been the best?</b><br /><b><br /></b></div><div>Before I move on to the methodology, I'll try to justify the title. I put less value on Super Bowl wins than most others that I talk to since they are so dependent on luck and the quality of your team's defense. I know there are gaps in using stats only...QBs playing from behind can rack up yards and touchdowns...QBs in the lead will run out the clock...clutch performance is important...it still feels a bit less random than Super Bowl wins.<br /><br />As a starter for only 60% or so of the typical prime years for a top QB, Steve Young managed to still have more top-20 (statistical) seasons than any other QB in history. 
He is the only QB with 2 of the top 5 seasons of all time, and he had the #1 season of all time.</div><div><br /></div><div>To give an example of how great he was compared with his peers, his passer rating in 1994 was 112.8. The 2nd best rating in the league that year was Brett Favre's 90.7. Contrast that with Aaron Rodgers' best season (2011). His passer rating that year was an incredible 122.5, but three other players had ratings over 100 (Brees, Brady, and Romo), and #2 that year (Brees) was 110.6. </div><div><br /></div><div>Young holds the record for most seasons with a passer rating over 100, and he did it back in the 1990s when a rating over 100 was exceptional. As an example of this, <b>QBs had season passer ratings over 100 only 7 times between 1991 and 1997. 6 of those 7 performances were Steve Young.</b></div><div><br /></div><div>Passer rating is not the full story, but the full story actually makes Young look even better...not only was his passer rating insanely high compared with his peers, but he also ran for 7 touchdowns in 1994 (#2 was Elway with 4 and #3 was Cunningham with 3). <b>He holds the modern NFL record for most rushing touchdowns by a quarterback. He holds the league record for most post-season rushing yards and rushing touchdowns for a QB. He achieved both of those even though he missed much of his prime. </b></div><div><br /></div><div>Given all of that, I can't convince myself that Brady is a better QB than Young was. More successful, yes...better overall? I just can't get there.</div><div><br /></div><div><h4>Methodology</h4><div>What does it mean to be the best? 
I settled on four criteria for quarterbacks:</div><div><ul><li><b>statistical performance:</b> how much of a statistical outlier was this player?</li><li><b>awards:</b> how did the media and fans rank this player against his peers?</li><li><b>success: </b>QBs are generally team leaders...how successful was his team?</li><li><b>consistency: </b>how long was this player at the top of the league?</li></ul><div><a href="http://www.somesolvedproblems.com/p/sports.html">For statistical performance, I used the methodology described here</a>. I used the top 24 quarterbacks in each season, used 7 years for a quarterback's prime, and used the following weighting:<br /><ul><li>total touchdowns per game (0.15)</li><li>total yards per game (0.15)</li><li>passing yards per attempt (0.2)</li><li>passing touchdowns per attempt (0.2)</li><li>interceptions per attempt (-0.2)</li><li>turnovers per game (-0.1)</li></ul><div>The resulting score is roughly 'number of standard deviations above his peers during his prime.' </div></div></div><div><br /></div><div>For awards, I considered only pro-bowl and all-pro voting. The algorithm is:<br /><ul><li>get # of AP 1st team all-pros, multiply by 0.2, and divide by 7; score is capped at 0.2</li><li>get # of AP 2nd team all-pros; take min of that number and '# of seasons - # of 1st team all-pro seasons'; multiply by 0.1, and divide by 7; score is capped at 0.1</li><li>get # of PB appearances (not as replacement); multiply by 0.05 and divide by 7; score is capped at 0.05</li><li>sum those three</li></ul><div>The max score is 0.25 and corresponds to making at least 7 1st team all-pro teams and 7 pro bowls. Peyton Manning is the only QB to do this.</div></div><div><br /></div><div>For success, I used post-season results. If a QB wins the Super Bowl every year that they're the starter, they get a score of 1. 
They get equivalently fewer points for each level up to that (e.g., 0.75 for losing in the Super Bowl).</div><div><br /></div><div>For consistency, I gave 10 points per season in which he was the top QB, 9 points if he was #2, and so on. They did not receive negative points, so it bottomed out at 0 in a season. I then scaled it so that a player that spent 10 seasons as the #1 player would get a score of 0.25. I view longevity as the least important metric, so this is weighted lower than the other metrics. </div><div><br /></div><h4>Comparing top 3 again</h4><div>Just to show their full profiles, here are the stat scores for each of the top-3 starting at the first year in which they were a top-24 QB in the NFL and ending with the last year in which they were a top-24 QB in the NFL:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-HyDx-hhJFlE/XHduPDBjYuI/AAAAAAAAEz8/RD-sgnM_f8cBr1HmXVKl0NdSognPar2jgCLcBGAs/s1600/Brady%2Bcareer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1084" height="auto" src="https://3.bp.blogspot.com/-HyDx-hhJFlE/XHduPDBjYuI/AAAAAAAAEz8/RD-sgnM_f8cBr1HmXVKl0NdSognPar2jgCLcBGAs/s1600/Brady%2Bcareer.png" width="80%" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/--BjttYqDvfM/XHduPLX5RgI/AAAAAAAAEz4/PPoVr92Sijg2cyrS9mKmOQ0pVq3envBJACLcBGAs/s1600/Manning%2Bcareer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1084" height="auto" src="https://2.bp.blogspot.com/--BjttYqDvfM/XHduPLX5RgI/AAAAAAAAEz4/PPoVr92Sijg2cyrS9mKmOQ0pVq3envBJACLcBGAs/s1600/Manning%2Bcareer.png" width="80%" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://4.bp.blogspot.com/--LeYlpmvMz0/XHd53O5wu2I/AAAAAAAAE0M/z3kamsoRIVcJkG-fYCg2j5yyt8tRzxl1ACLcBGAs/s1600/Young%2Bcareer.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1084" height="auto" src="https://4.bp.blogspot.com/--LeYlpmvMz0/XHd53O5wu2I/AAAAAAAAE0M/z3kamsoRIVcJkG-fYCg2j5yyt8tRzxl1ACLcBGAs/s1600/Young%2Bcareer.png" width="80%" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><div><br /></div><div><br /></div><div>You can clearly see the loss in productivity for Young due to joining the NFL late, sitting behind Montana, then retiring early from injuries. It's also cool to see the lack of negative years. A negative value doesn't mean bad here...it just means worse than the typical starter that year. Even given that, these guys just didn't have many down years. For reference on what negative means here, Cam Newton, Baker Mayfield, and Dak Prescott all had negative years in 2018. Andrew Luck was just barely positive (0.19).</div><div><br /></div><h4>Conclusion</h4><div>In case it wasn't completely obvious, I find Steve Young's peak fascinating and it's frustrating that it was shortened to the point that it's hard to compare him with other QBs. He's like the Barry Sanders of QBs. </div><div><br /></div><div>I also really like this methodology for comparing QB statistical performance. A passer rating of 100 now isn't nearly as impressive as it was 30 years ago, and this accounts for that fairly well. 
Let me know why I'm wrong about all of this in the comments.</div><div><br /></div><div><br /></div><div><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-58777705060936996332019-02-22T22:47:00.003-08:002019-02-23T09:20:07.235-08:00Tutorial: Writing a Vue App from Start to DeploymentI didn't see any nice tutorials from start to finish on writing and deploying a Vue app, so I wrote down all the steps with a sample project. The sample project uses <a href="https://vuetifyjs.com/en/">Vuetify</a> as the Vue framework and <a href="https://plot.ly/javascript/">plotly</a> as the plotting framework. To get started, <a href="http://cityprojections.com/vuetest/">here is the finished product</a>.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-fkBUua6F9gE/XHDsHXWuc-I/AAAAAAAAEyI/1zvjH9hor3Al8OiX3RRUx6qgRwb2qE6rwCLcBGAs/s1600/sample%2Bimage.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="779" data-original-width="1454" height="auto" src="https://3.bp.blogspot.com/-fkBUua6F9gE/XHDsHXWuc-I/AAAAAAAAEyI/1zvjH9hor3Al8OiX3RRUx6qgRwb2qE6rwCLcBGAs/s1600/sample%2Bimage.png" width="80%" /></a></div><br /><a name='more'></a><h4></h4><h4></h4><h4></h4><h4><br /></h4><h4>Intro</h4><div>This tutorial assumes that you have an IDE and npm and that you have basic JavaScript knowledge. You should also look through the <a href="https://vuejs.org/v2/guide/index.html">Vue guide</a>. 
If you don't know what IDE to use, I used <a href="https://code.visualstudio.com/">Visual Studio Code</a> for this and it's pretty great.</div><h4></h4><h4></h4><h4></h4><h4><br /></h4><h4>Installing Vue CLI 3 and getting a project</h4><div>Run the following command:<br /><br /><i>npm install -g @vue/cli</i></div><div><br /></div><div>Next run:<br /><br /><i>vue create vue-test</i></div><div><br /></div><div>Pick the defaults. This should have created a project with a bunch of files for you. Now switch your environment/workspace/folder/whatever it's called in your IDE to the path of the project you created. You should see something like this:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-7rYgvF0zNZE/XHDrmMVWbvI/AAAAAAAAEyA/qi5TTjlLjfclpAFYMDBvQqhNjqg1qDoUACEwYBhgL/s1600/initial%2Bproject%2Bfiles.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="549" data-original-width="322" height="auto" src="https://3.bp.blogspot.com/-7rYgvF0zNZE/XHDrmMVWbvI/AAAAAAAAEyA/qi5TTjlLjfclpAFYMDBvQqhNjqg1qDoUACEwYBhgL/s1600/initial%2Bproject%2Bfiles.png" width="40%" /></a></div><br /></div><h4></h4><h4></h4><h4></h4><h4><br /></h4><h4>Adding <a href="https://vuetifyjs.com/en/">Vuetify</a></h4><div>Run the following command:</div><div><br /></div><div><i>vue add vuetify</i></div><div><br /></div><div>Choose default. Open the 'main.js' file in your project (src/main.js). 
Add this under the <i>import App from './App.vue' </i>line:</div><div><br /></div><div><div><i>import Vuetify from 'vuetify'</i></div><div><i><br /></i></div><div><i>Vue.use(Vuetify);</i></div><div><i><br /></i></div><div><i>import 'vuetify/dist/vuetify.min.css'</i></div></div><div><br /></div><div>You can now use <a href="https://vuetifyjs.com/en/">Vuetify</a>.</div><h4></h4><h4></h4><h4></h4><h4><br /></h4><h4>Adding <a href="https://plot.ly/javascript/">plotly</a></h4><div>Run the following command:<br /><br /><i>npm install --save plotly.js</i></div><div><br /></div><div>Open the 'HelloWorld.vue' file in your project (src/components/HelloWorld.vue). Change the name to 'PlotlyGraph.vue'. Replace the contents of the file with <a href="https://github.com/rhamner/vue-test/blob/master/src/components/PlotlyGraph.vue">this code</a>. For reference on how this plotly implementation works, <a href="http://www.somesolvedproblems.com/2018/05/how-to-use-plotly-in-vue.html">see this tutorial</a> I wrote a while back. This is just an extension of that.</div><h4></h4><h4></h4><h4></h4><h4><br /></h4><h4>Adding <a href="https://www.npmjs.com/package/bigeval">BigEval</a></h4><div>If you clicked the example at the start, you might have noticed a 'custom' button. Clicking that lets you type in an equation for it to plot. To interpret that equation, you can use <a href="https://www.npmjs.com/package/bigeval">BigEval</a>. </div><div><br /></div><div>Run the following command:</div><div><br /></div><div><i>npm install bigeval</i></div><div><br /></div><div>That's it.</div><div><h4></h4><h4></h4><h4></h4><h4><br /></h4><h4>Writing the page</h4><div>I won't paste the entirety of the page here because you can simply open the code and look at it. I will walk through how a few key pieces work to give some insight. 
To run the app, use the following command:<br /><br /><i>npm run serve</i><br /><i><br /></i><a href="https://github.com/rhamner/vue-test/blob/master/src/App.vue">Here is the code for the page</a>.</div><div><br /></div><div><b>Using plotly in Vue</b></div><div><br /></div><div>The PlotlyGraph component above exposes 'propData', 'divId', 'height', and 'width' as props. 'propData' is for all information that gets passed into methods like Plotly.newPlot. 'divId' is the id of the div the component creates, and is primarily used to identify the source of events that reach the caller. 'height' and 'width' let you control the size of the plot in the div containing the component. </div><div><br /></div><div>'propData' has three members:<br /><ul><li>data: array of trace objects</li><li>layout: object with layout information</li><li>config: object with plotly config info</li></ul><div>This example builds up this object, then sets it at the end of a single method called 'updatePlot'.<br /><br />The app also listens to the plot's 'afterplot' event. That is triggered whenever the plot updates, and the app logs the time that it fires below the plot.<br /><br /><b>Triggering plot updates</b><br /><b><br /></b>Vue watches are used to update the plot when inputs change, and they are registered in 'mounted'. There are three inputs that have watches on them:<br /><ul><li>selectedFunction: which function you selected to plot</li><li>A: value of A</li><li>B: value of B</li></ul><div>Watches simply execute the method attached to them ('updatePlot' in this case) whenever the variable being watched changes. I did not add a watch for the equation...I'm using @change to trigger it so that there are examples of calling 'updatePlot' through both mechanisms.</div><div><br /></div><div><b>Interpreting the equation</b></div><div><b><br /></b></div><div>The equation that's typed in is a string. 
The code converts that string into something BigEval can understand using the following replacements:</div><div><ul><li>y, =, and spaces are converted to nothing</li><li>x is replaced by the current value of the iterator</li><li>A is replaced by the value in A's input</li><li>B is replaced by the value in B's input</li></ul><div>Thus, something like <i>y = A*x + B*sin(x)</i> becomes <i>4*2 + 3*0.0349</i> for A = 4, B = 3, and x = 2.</div></div><div><br /></div><div>You'll notice that the equation has some validation attached to it. I used two crude rules for that:<br /><ul><li>the equation string cannot be empty</li><li>when x = -1, the equation string cannot result in NaN or ERROR; this is not robust but works well enough for a quick example</li></ul><div>The code uses the 'rules' prop to call the 'validEquation' method to perform this validation. This comes from Vuetify.</div></div><div><br /></div><h4>Building and deploying the app</h4><div>Once everything is implemented, build and deploy is refreshingly easy. Simply run the following command:<br /><br /><i>npm run build</i></div><div><i><br /></i></div><div>After this finishes, it will create a 'dist' folder under your directory. This is all you need. It contains an index.html file and all of the necessary dependencies. Simply move those to your host, and the page is live. You will need to update the href values in index.html to match whatever hierarchy you have for your files, but that's it.<br /><br />For hosting, there are many options. I have a HostGator account for another site (<a href="http://blog.cityprojections.com/">http://blog.cityprojections.com/</a>) so I just used it. I've used <a href="https://www.000webhost.com/">https://www.000webhost.com/</a> in the past and they were fine. 
Just google it and pick one you like.<br /><h4></h4><h4></h4><h4></h4><h4><br /></h4><h4>Conclusion</h4></div><div>This does not cover everything in Vue obviously, but it does show:</div><div><ul><li>setting up a project</li><li>using many basic features in vue (events, watches, binding, v-for, using a framework)</li><li>pulling in external packages</li><li>building a project</li></ul><div>Some big pieces that you should read up on that aren't included here are:<br /><ul><li><a href="https://vuex.vuejs.org/">Vuex</a></li><li><a href="https://vuejs.org/v2/guide/routing.html">Routing</a></li><li><a href="https://vuejs.org/v2/guide/unit-testing.html">Unit testing</a></li></ul><br />Hopefully this helps with getting started on something larger than a typical example.</div></div><div><br /></div><div><a href="https://github.com/rhamner/vue-test">The full repo is here.</a></div><div><br /></div><div><br /></div><br /><br /></div></div></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-71852453320524115642019-02-14T21:52:00.000-08:002019-02-14T21:52:49.320-08:00What Does R^2 Mean in Linear Regression?You see r^2 constantly when you see linear fits or linear regression. You'll often hear that it represents '% of variance explained by the model'. What does that mean?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-YX_RuReKgtI/XGZTWHJCl3I/AAAAAAAAExY/GaewenA22UEWhPx9GJR8YuM5rIAJnwdiQCLcBGAs/s1600/image%2B%25281%2529.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="646" data-original-width="1122" height="auto" src="https://4.bp.blogspot.com/-YX_RuReKgtI/XGZTWHJCl3I/AAAAAAAAExY/GaewenA22UEWhPx9GJR8YuM5rIAJnwdiQCLcBGAs/s1600/image%2B%25281%2529.png" width="80%" /></a></div><br /><a name='more'></a><h4>Data</h4><div>I've generated a fake blood pressure data set. 
The set contains blood pressure (systolic; BP throughout), distance from a freeway broken into 4 categories, and income level broken into 2 categories. The BP is equal to 130 - 10*[income level (0 or 1)] - 5*[distance to road (0, 1, 2, or 3)].<br /><br /></div><h4>Run with perfect data</h4><div>To start with, there is no noise in the data and each distance to road group is 80% low-income and 20% high-income. That is, there is no correlation between 'distance to road' and 'income level'. Trying out three regression models, the results are:<br /><br /></div><div><table style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr style="background-color: #e6a117;"><th>Model</th><th>R^2</th></tr><tr><td>BP = C*(distance to road)</td><td>0.66</td></tr><tr><td>BP = C*(income level)</td><td>0.34</td></tr><tr><td>BP = C1*(distance to road) + C2*(income level)</td><td>1.00</td></tr></tbody></table><br /><div><br />Considering only one of the variables gives you an r^2 of either 0.66 or 0.34. Considering both gives you an r^2 of 1. This is the meaning of '% of variance explained by the model'. The model is the sum of two components. Including one or the other explains part of the variance. Including both explains 100% of it.</div></div><div><br /></div><div>Since there was no correlation between the two components, each one's r^2 is simply 100% minus the other's.<br /><br /></div><h4>Run with partially correlated data</h4><div>Now the data set is adjusted so that as distance from the road increases, average income increases. In the 0 distance bin, 8% of results are high-income. In the 1, 2, and 3 distance bins, 16%, 24%, and 32% are high-income respectively. 
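This correlated setup can be built deterministically (e.g., 25 subjects per distance bin with 2, 4, 6, and 8 high-income subjects respectively) and checked with a quick least-squares sketch. This is my own illustration, assuming numpy is available; the `r_squared` helper is defined here and is not part of the post:

```python
import numpy as np

# Correlated, noise-free data: 25 subjects per distance bin, with the
# high-income fraction rising with distance (8%, 16%, 24%, 32%).
rows = []
for dist_bin, n_high in zip(range(4), [2, 4, 6, 8]):
    for income_lvl in [1] * n_high + [0] * (25 - n_high):
        rows.append((dist_bin, income_lvl, 130 - 10 * income_lvl - 5 * dist_bin))
data = np.array(rows, dtype=float)
dist, income, bp = data[:, 0], data[:, 1], data[:, 2]

def r_squared(y, *features):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones_like(y), *features])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

print(round(r_squared(bp, dist), 2))          # distance alone: ~0.73
print(round(r_squared(bp, income), 2))        # income alone: ~0.48
print(round(r_squared(bp, dist, income), 2))  # both: 1.0
```

Note that the two single-variable r^2 values sum to more than 1 here precisely because the regressors are correlated; in the uncorrelated run they sum to exactly 1.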
Trying out three regression models again, our results are:<br /><br /></div><div><table style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr style="background-color: #e6a117;"><th>Model</th><th>R^2</th></tr><tr><td>BP = C*(distance to road)</td><td>0.73</td></tr><tr><td>BP = C*(income level)</td><td>0.48</td></tr><tr><td>BP = C1*(distance to road) + C2*(income level)</td><td>1.00</td></tr></tbody></table><br /><div>Notice the change here. The individual r^2's no longer sum to 1. Because 'distance to road' and 'income level' are correlated here, the apparent effects of each one in isolation are amplified.<br /><br /></div></div><h4>Run with noisy data</h4><div>Now the original, perfect data set is adjusted to add some random variance to the results. The new model is 130 - 10*[income level (0 or 1)] - 5*[distance to road (0, 1, 2, or 3)] + noise with a stdev of 5. Trying out three regression models again, our results are:<br /><br /></div><div><table style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr style="background-color: #e6a117;"><th>Model</th><th>R^2</th></tr><tr><td>BP = C*(distance to road)</td><td>0.43</td></tr><tr><td>BP = C*(income level)</td><td>0.22</td></tr><tr><td>BP = C1*(distance to road) + C2*(income level)</td><td>0.65</td></tr></tbody></table><br /><div>The complete model no longer has an r^2 of 1. This is because it does not explain the variance introduced by the noise that was added to the results.<br /><br /></div></div><h4>Does correlation imply causation?</h4><div>Another phrase you'll often hear is 'correlation does not imply causation'. What does that mean? The rough summary is that two variables having a high r^2 when plotted against each other doesn't necessarily mean that one variable affects the other. 
We can see it clearly with an example.</div><div><br /></div><div>Taking our original, perfect data set, assume that all high-income subjects paint their houses blue, and all low-income subjects paint their houses red. Add 'house color' to the model (0 for red and 1 for blue). If I try out a regression model of BP = C*(house color), I get an r^2 of 0.34. House color is not the cause of the blood pressure drop...income is. However, income explains both the blood pressure drop and the house color, so house color and blood pressure do have a relationship, but neither one causes the other.</div><div><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-1635822496522185692019-02-08T18:08:00.000-08:002019-02-08T19:21:22.030-08:00Find All Ints Less Than N with M Digits Equal to 1 in Their Binary RepresentationWe stumbled on a fun algorithm problem in the wild at work this week.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-vvDkWNcTNIg/XF41W0iVw2I/AAAAAAAAExE/l3xTfrY15v04Pqs9AJUJLfH6uSPBOBQXQCLcBGAs/s1600/complexity%2Bcomparison.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="581" data-original-width="1097" height="auto" src="https://2.bp.blogspot.com/-vvDkWNcTNIg/XF41W0iVw2I/AAAAAAAAExE/l3xTfrY15v04Pqs9AJUJLfH6uSPBOBQXQCLcBGAs/s1600/complexity%2Bcomparison.png" width="0%" /></a></div><a name='more'></a><h4>Real Life Problem</h4><div>We needed to test commands coming into a system on two paths to make sure there were no race conditions in the processing. Example commands are 'set mode = Voltage; set Range = 10' and 'set mode = Current; set Range = 1'. 
To test this, we needed to generate an array that consisted of all combinations of the two command arrays such that the order of elements from each array was maintained.</div><div><br /></div><div>As an example, consider arrays with commands 1, 2 and commands a, b. We wanted all combinations of those two such that 2 was always after 1, and b was always after a:<br /><br />1,2,a,b</div><div>1,a,2,b</div><div>a,1,2,b</div><div>1,a,b,2</div><div>a,1,b,2</div><div>a,b,1,2</div><div><br /></div><h4>Solution</h4><div>It turns out that this problem is the same as finding all ints less than 2^N where N is the length of the combined array, such that the number of 1's in their binary representation is equal to the length of the second array. To see this, consider the same problem as above but replace 1, 2 with 0, 0, and a,b with 1,1:<br /><br />0011 = 3</div><div>0101 = 5</div><div>1001 = 9</div><div>0110 = 6</div><div>1010 = 10</div><div>1100 = 12</div><div><br /></div><div>N = 4, so 2^N is 16. That list is all ints less than 16 with two 1's in their binary representation.</div><div><br /></div><h4>Algorithms</h4><div>How do you find those?</div><div><br /></div><div>Our simplest solution was to generate all numbers from 1 to 2^N and count the 1's in their binary representations. 
Here is a sample implementation:</div><div><!-- HTML generated using hilite.me --><br /><div style="background: #272822; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0;"><span style="color: #f92672;">import</span> <span style="color: #f8f8f2;">time;</span> <br /><span style="color: #f8f8f2;">ms</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">time</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">time()</span><span style="color: #f92672;">*</span><span style="color: #ae81ff;">1000.0</span><br /><br /><span style="color: #f8f8f2;">list1</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[</span><span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">]</span><span style="color: #f92672;">*</span><span style="color: #ae81ff;">10</span><br /><span style="color: #f8f8f2;">list2</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[</span><span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">]</span><span style="color: #f92672;">*</span><span style="color: #ae81ff;">10</span><br /><span style="color: #f8f8f2;">tlen</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">len(list1)</span> <span style="color: #f92672;">+</span> <span style="color: #f8f8f2;">len(list2)</span><br /><br /><span style="color: #75715e;">#get array of all powers of 2 needed here</span><br /><span style="color: #f8f8f2;">twos</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">{}</span><br /><span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">i</span> <span style="color: #f92672;">in</span> <span style="color: #f8f8f2;">range(tlen):</span><br /> <span style="color: #f8f8f2;">twos[i]</span> <span style="color: #f92672;">=</span> <span style="color: #ae81ff;">2</span><span style="color: #f92672;">**</span><span 
style="color: #f8f8f2;">i</span><br /> <br /><span style="color: #75715e;">#generate all numbers up to 2^total length</span><br /><span style="color: #f8f8f2;">results</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[]</span><br /><span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">i</span> <span style="color: #f92672;">in</span> <span style="color: #f8f8f2;">range(</span><span style="color: #ae81ff;">2</span><span style="color: #f92672;">**</span><span style="color: #f8f8f2;">tlen):</span><br /> <br /> <span style="color: #75715e;">#get binary representation of i</span><br /> <span style="color: #f8f8f2;">btemp</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[int(x)</span> <span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">x</span> <span style="color: #f92672;">in</span> <span style="color: #f8f8f2;">bin(i)[</span><span style="color: #ae81ff;">2</span><span style="color: #f8f8f2;">:]]</span><br /> <br /> <span style="color: #75715e;">#keep number if sum of digits is correct</span><br /> <span style="color: #66d9ef;">if</span> <span style="color: #f8f8f2;">sum(btemp)</span> <span style="color: #f92672;">==</span> <span style="color: #f8f8f2;">len(list2):</span><br /> <span style="color: #f8f8f2;">val</span> <span style="color: #f92672;">=</span> <span style="color: #ae81ff;">0</span><br /> <span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">j</span> <span style="color: #f92672;">in</span> <span style="color: #f8f8f2;">range(len(btemp)):</span><br /> <span style="color: #66d9ef;">if</span> <span style="color: #f8f8f2;">btemp[j]</span> <span style="color: #f92672;">==</span> <span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">:</span><br /> <span style="color: #f8f8f2;">val</span> <span style="color: 
#f92672;">+=</span> <span style="color: #f8f8f2;">twos[len(btemp)</span> <span style="color: #f92672;">-</span> <span style="color: #f8f8f2;">j</span> <span style="color: #f92672;">-</span> <span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">]</span><br /> <br /> <span style="color: #f8f8f2;">results</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">append(val)</span><br /> <br /><span style="color: #66d9ef;">print</span><span style="color: #f8f8f2;">(time</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">time()</span><span style="color: #f92672;">*</span><span style="color: #ae81ff;">1000.0</span> <span style="color: #f92672;">-</span> <span style="color: #f8f8f2;">ms)</span><br /></pre></div><br /></div><div>This is clean and simple, but there's one huge problem: if you work through the complexity of the algorithm, it's O(2^N). That's enormous. If each array is 25 elements, that's ~1E15 iterations, and on a typical computer the execution time would be measured in years. Can you do better?</div><div><br /></div><div>Look at the pattern of numbers in the example above. You can see a general trend:</div><div><ul><li>start with 2^M - 1, where M is the length of the second array</li><li>shift the leftmost '1' a single position to the left</li><li>repeat until it reaches the leftmost position</li><li>if the next '1' is more than 1 position to the right of the leftmost '1', shift it left by one, stack the '1's before it immediately to its left, and repeat; do this for all 1's</li><li>when all 1's are adjacent and the leftmost is in the first position, you're done</li></ul><div>What does that look like? Here is a sample implementation. 
'Refs' is just an array tracking the positions of the '1' elements in the array:</div></div><div><!-- HTML generated using hilite.me --><br /><div style="background: #272822; border-width: 0.1em 0.1em 0.1em 0.8em; border: solid gray; overflow: auto; padding: 0.2em 0.6em; width: auto;"><pre style="line-height: 125%; margin: 0;"><span style="color: #f92672;">import</span> <span style="color: #f8f8f2;">time;</span> <br /><span style="color: #f8f8f2;">ms</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">time</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">time()</span><span style="color: #f92672;">*</span><span style="color: #ae81ff;">1000.0</span><br /><br /><span style="color: #f8f8f2;">list1</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[</span><span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">]</span><span style="color: #f92672;">*</span><span style="color: #ae81ff;">10</span><br /><span style="color: #f8f8f2;">list2</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[</span><span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">]</span><span style="color: #f92672;">*</span><span style="color: #ae81ff;">10</span><br /><span style="color: #f8f8f2;">tlen</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">len(list1)</span> <span style="color: #f92672;">+</span> <span style="color: #f8f8f2;">len(list2)</span><br /><br /><span style="color: #75715e;">#build list of refs that tracks 1's in the array</span><br /><span style="color: #f8f8f2;">refs</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">[</span><span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">]</span><span style="color: #f92672;">*</span><span style="color: #f8f8f2;">len(list2)</span><br /><span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">i</span> <span style="color: #f92672;">in</span> 
<span style="color: #f8f8f2;">range(len(list2)):</span><br /> <span style="color: #f8f8f2;">refs[i]</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">len(list1)</span> <span style="color: #f92672;">+</span> <span style="color: #f8f8f2;">i</span><br /> <br /><span style="color: #f8f8f2;">root</span> <span style="color: #f92672;">=</span> <span style="color: #ae81ff;">2</span><span style="color: #f92672;">**</span><span style="color: #f8f8f2;">(len(list2))</span> <span style="color: #f92672;">-</span> <span style="color: #ae81ff;">1</span><br /><br /><span style="color: #75715e;">#build dictionary of powers of two that are needed</span><br /><span style="color: #f8f8f2;">twos</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">{}</span><br /><span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">i</span> <span style="color: #f92672;">in</span> <span style="color: #f8f8f2;">range(tlen):</span><br /> <span style="color: #f8f8f2;">twos[i]</span> <span style="color: #f92672;">=</span> <span style="color: #ae81ff;">2</span><span style="color: #f92672;">**</span><span style="color: #f8f8f2;">i</span><br /> <br /><span style="color: #75715e;">#run until all refs are in their leftmost position</span><br /><span style="color: #f8f8f2;">flag</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">True</span><br /><span style="color: #66d9ef;">while</span><span style="color: #f8f8f2;">(flag):</span><br /> <br /> <span style="color: #75715e;">#walk leftmost ref down to zero</span><br /> <span style="color: #66d9ef;">while</span> <span style="color: #f8f8f2;">refs[</span><span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">]</span> <span style="color: #f92672;">>=</span> <span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">:</span><br /> <span style="color: #f8f8f2;">root</span> <span style="color: #f92672;">+=</span> <span style="color: #f8f8f2;">twos[tlen</span> 
<span style="color: #f92672;">-</span> <span style="color: #f8f8f2;">refs[</span><span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">]</span> <span style="color: #f92672;">-</span> <span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">]</span><br /> <span style="color: #f8f8f2;">refs[</span><span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">]</span> <span style="color: #f92672;">-=</span> <span style="color: #ae81ff;">1</span><br /><br /> <span style="color: #75715e;">#check if all refs in leftmost position; if not, shift according to first ref that can be left-shifted</span><br /> <span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">i</span> <span style="color: #f92672;">in</span> <span style="color: #f8f8f2;">range(</span><span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">,</span> <span style="color: #f8f8f2;">len(refs)):</span><br /> <span style="color: #66d9ef;">if</span> <span style="color: #f8f8f2;">refs[i]</span> <span style="color: #f92672;">></span> <span style="color: #f8f8f2;">i:</span><br /> <span style="color: #f8f8f2;">refs[i]</span> <span style="color: #f92672;">-=</span> <span style="color: #ae81ff;">1</span><br /> <span style="color: #f8f8f2;">j</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">i</span> <span style="color: #f92672;">-</span> <span style="color: #ae81ff;">1</span><br /> <span style="color: #66d9ef;">while</span><span style="color: #f8f8f2;">(j</span> <span style="color: #f92672;">>=</span> <span style="color: #ae81ff;">0</span><span style="color: #f8f8f2;">):</span><br /> <span style="color: #f8f8f2;">refs[j]</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">refs[j</span> <span style="color: #f92672;">+</span> <span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">]</span> <span style="color: #f92672;">-</span> <span style="color: #ae81ff;">1</span><br /> <span style="color: 
#f8f8f2;">j</span> <span style="color: #f92672;">-=</span> <span style="color: #ae81ff;">1</span><br /> <span style="color: #f8f8f2;">root</span> <span style="color: #f92672;">=</span> <span style="color: #ae81ff;">0</span><br /> <span style="color: #66d9ef;">for</span> <span style="color: #f8f8f2;">j</span> <span style="color: #f92672;">in</span> <span style="color: #f8f8f2;">refs:</span><br /> <span style="color: #f8f8f2;">root</span> <span style="color: #f92672;">+=</span> <span style="color: #f8f8f2;">twos[(tlen</span> <span style="color: #f92672;">-</span> <span style="color: #f8f8f2;">j</span> <span style="color: #f92672;">-</span> <span style="color: #ae81ff;">1</span><span style="color: #f8f8f2;">)]</span><br /> <span style="color: #f8f8f2;">flag</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">True</span><br /> <span style="color: #66d9ef;">break</span><br /> <span style="color: #f8f8f2;">flag</span> <span style="color: #f92672;">=</span> <span style="color: #f8f8f2;">False</span><br /> <br /><span style="color: #66d9ef;">print</span><span style="color: #f8f8f2;">(time</span><span style="color: #f92672;">.</span><span style="color: #f8f8f2;">time()</span><span style="color: #f92672;">*</span><span style="color: #ae81ff;">1000.0</span> <span style="color: #f92672;">-</span> <span style="color: #f8f8f2;">ms)</span><br /></pre></div><br /></div><div>The 'walk leftmost ref down to zero' loop has a weird 'add 2^(tlen - refs[0] - 1)' line. What is that? Consider the number 011. That's 3. Now consider 101. That's 5. To get from 3 to 5, you drop the 2^1 bit and add a 2^2 bit. 0101 to 1001 is 5 to 9; you dropped a 2^2 and added a 2^3. 2^2 - 2^1 = 2. 2^3 - 2^2 = 4. This continues: 2^(x) - 2^(x - 1) = 2^(x - 1). 
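That pattern is easy to check directly in a few lines of Python:

```python
# Moving a set bit one position left means dropping 2^(x-1) and adding 2^x,
# which nets out to a single addition of 2^(x-1):
for x in range(1, 10):
    assert 2**x - 2**(x - 1) == 2**(x - 1)

# Concretely: 011 (3) -> 101 (5) adds 2^1, and 0101 (5) -> 1001 (9) adds 2^2.
assert 0b011 + 2**1 == 0b101
assert 0b0101 + 2**2 == 0b1001
```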
That line is thus handling that bit swap by just adding the correct 2^x value.</div><div><br /></div><div>You can write this with essentially identical logic using arrays of 0's and 1's and swapping adjacent indices if you want. That algorithm was actually very slightly faster for me, but is a bit harder to read so I went with this one here.</div><div><br /></div><div>What's the complexity? Letting N be the length of the combined array and M be the length of the second array, the complexity is O(N!/[(N - M)!*M!]), i.e., the binomial coefficient C(N, M). It might not be obvious, but that's much better than O(2^N). If each array is 25 elements, that's ~1E14 iterations...a full order of magnitude better than 2^N. I've included a plot of those two below assuming N = 2*M:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-vvDkWNcTNIg/XF41W0iVw2I/AAAAAAAAExE/l3xTfrY15v04Pqs9AJUJLfH6uSPBOBQXQCLcBGAs/s1600/complexity%2Bcomparison.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="581" data-original-width="1097" height="auto" src="https://2.bp.blogspot.com/-vvDkWNcTNIg/XF41W0iVw2I/AAAAAAAAExE/l3xTfrY15v04Pqs9AJUJLfH6uSPBOBQXQCLcBGAs/s1600/complexity%2Bcomparison.png" width="80%" /></a></div><br /></div><div>That also happens to be the number of combinations. Since that scales as the number of combinations and you need to generate all combinations, I don't think a less 'complex' algorithm is possible here. However, there are some computational tricks that you can use that I did not include here because they complicate the code and didn't significantly increase the performance when I tried them. </div><div><br /></div><div>An example is that every even result is twice one of the other results. Thus, you can stop once you've found all odd results, then simply multiply those by 2 until you reach >= 2^N to get the remaining results. 
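A minimal sketch of that doubling trick, using small hypothetical sizes (and itertools rather than the timed implementations above) just to verify the idea:

```python
from itertools import combinations

N, M = 8, 3  # hypothetical array sizes for illustration

# All N-bit numbers with exactly M bits set (what the algorithms above generate).
all_results = {sum(1 << p for p in pos) for pos in combinations(range(N), M)}

# Doubling preserves the number of set bits, so every even result is just
# some odd result shifted left. Generate only the odd ones, then double
# each until it reaches 2^N.
odd_results = {r for r in all_results if r % 2 == 1}
rebuilt = set()
for r in odd_results:
    while r < 2**N:
        rebuilt.add(r)
        r *= 2

assert rebuilt == all_results
```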
You can also use bit shifts and logical operations, but they complicate the code more than I wanted for an overview here.<br /><br /></div><h4>Conclusion</h4><div>This was a fun problem. It feels like a complex interview question, and it seems like these only pop up a couple of times a year in normal work, so it's always a bit exciting when they do.</div><div><br /></div><div><br /></div><div><br /></div><div><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-81296367231480374222019-02-03T12:37:00.000-08:002019-02-03T12:40:33.760-08:00Antonio Brown Is on Track to Be One of the Best Wide Receivers in HistoryWho were the best wide receivers ever? Here is my attempt at answering that question...<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-EMXhOjBR884/XFEWh91i9rI/AAAAAAAAEv0/oHj_2M73Ljk74TWOXEGlhZ6frDcY-xBMACLcBGAs/s1600/active.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1079" height="auto" src="https://1.bp.blogspot.com/-EMXhOjBR884/XFEWh91i9rI/AAAAAAAAEv0/oHj_2M73Ljk74TWOXEGlhZ6frDcY-xBMACLcBGAs/s1600/active.png" width="0%" /></a></div><div><a name='more'></a><h4>Best Career Rankings</h4></div><div>The best way to interpret these rankings is that they answer the following question: 'Which wide receiver's prime was strongest when compared with his direct peers?'</div><div><br /></div><div><table style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr style="background-color: #e6a117;"><th>Name</th><th>Stat Score</th><th>Award Score</th><th>Total Score</th></tr><tr><td>Jerry Rice</td><td>2.18</td><td>0.25</td><td>2.43</td></tr><tr><td>Marvin Harrison</td><td>1.59</td><td>0.21</td><td>1.80</td></tr><tr><td>Antonio Brown</td><td>1.44</td><td>0.22</td><td>1.66</td></tr><tr><td>Terrell 
Owens</td><td>1.36</td><td>0.24</td><td>1.60</td></tr><tr><td>Randy Moss</td><td>1.24</td><td>0.22</td><td>1.46</td></tr><tr><td>Steve Largent</td><td>1.08</td><td>0.17</td><td>1.25</td></tr><tr><td>Calvin Johnson</td><td>1.01</td><td>0.21</td><td>1.22</td></tr><tr><td>Sterling Sharpe</td><td>0.98</td><td>0.18</td><td>1.16</td></tr><tr><td>Michael Irvin</td><td>0.85</td><td>0.14</td><td>0.99</td></tr><tr><td>Torry Holt</td><td>0.81</td><td>0.17</td><td>0.98</td></tr><tr><td>Cris Carter</td><td>0.76</td><td>0.19</td><td>0.95</td></tr><tr><td>Larry Fitzgerald</td><td>0.78</td><td>0.17</td><td>0.95</td></tr><tr><td>Andre Johnson</td><td>0.74</td><td>0.19</td><td>0.94</td></tr><tr><td>Julio Jones</td><td>0.73</td><td>0.19</td><td>0.92</td></tr><tr><td>Andre Rison</td><td>0.77</td><td>0.14</td><td>0.91</td></tr><tr><td>Brandon Marshall</td><td>0.74</td><td>0.17</td><td>0.90</td></tr><tr><td>Mark Clayton</td><td>0.68</td><td>0.14</td><td>0.82</td></tr><tr><td>Cliff Branch</td><td>0.72</td><td>0.04</td><td>0.77</td></tr><tr><td>Steve Smith</td><td>0.57</td><td>0.17</td><td>0.74</td></tr><tr><td>Reggie Wayne</td><td>0.55</td><td>0.17</td><td>0.71</td></tr></tbody></table><b><i><br /></i></b>The methodology used here requires at least 6 prime years for a player (see below). Some newer players who might eventually make the list are therefore not included. Among current players, Antonio Brown is the best and already has enough time in the NFL to make the list here. DeAndre Hopkins is actually on pace to be #6 all-time if he keeps doing what he's doing. As you can see above, Larry Fitzgerald and Julio Jones are also on the list.<br /><br />Odell Beckham Jr. is also on track to make it. He is trending down, however, so he'll need 2-3 more years better than his 2018 performance to make it. Davante Adams and Michael Thomas are also on track to make the top 20 if they get 3-4 more seasons like their 2017 and 2018 seasons. 
Dez Bryant still has a glimmer of hope, but he'd need 3 more above-average seasons and he's trending down, so it seems unlikely.<br /><br />Other long-tenured active players who I thought might have had a chance, but whose scores just aren't good enough, are:<br /><ul><li>DeSean Jackson (-0.52...he's actually been worse than average)</li><li>A.J. Green (0.13...hasn't had a good enough season since 2013 so odds seem low)</li><li>Demaryius Thomas (0.26)</li></ul><br /><h4>Best Season Rankings</h4></div><div>According to the stat score here, the best single seasons by a wide receiver in the modern era were:<br /><br /></div><div><table style="margin-left: auto; margin-right: auto;"><tbody><tr style="background-color: #e6a117;"><th>Player</th><th>Year</th><th>Stat Score</th></tr><tr><td>Jerry Rice</td><td>1990</td><td>2.68</td></tr><tr><td>John Jefferson</td><td>1980</td><td>2.46</td></tr><tr><td>Jerry Rice</td><td>1987</td><td>2.45</td></tr><tr><td>Cliff Branch</td><td>1974</td><td>2.36</td></tr><tr><td>Marvin Harrison</td><td>2002</td><td>2.28</td></tr></tbody></table><br /></div><div>The best season for a still-active wide receiver was Antonio Brown's 2017 season (22nd all-time).<br /><br /></div><h4>Methodology</h4><div>What does it mean to be the best? I settled on two criteria for wide receivers:</div><div><ul><li><b>statistical performance:</b> how much of a statistical outlier was this player?</li><li><b>awards:</b> how did the media and fans rank this player against his peers?</li></ul><div><a href="http://www.somesolvedproblems.com/p/sports.html">For statistical performance, I used the methodology described here</a>. 
I used the top 36 wide receivers and tight ends in each season, used 6 years for a wide receiver's prime, and used the following weighting:<br /><ul><li>receptions per game (0.25)</li><li>receiving yards per game (0.35)</li><li>total touchdowns per game (0.35)</li><li>fumbles per game (-0.05)</li></ul><div>The resulting score is roughly 'number of standard deviations above his peers during his prime.' Just how good was Jerry Rice? Here is a comparison between him and Randy Moss. Note that these plots have no scores for some seasons as the player was ineligible due to injury, too little production, etc.<br /><br /><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-qPBQohNsr3M/XFEVTeNFLEI/AAAAAAAAEvc/gy4MtNZf9dY6aeqYWvYnj2TSdEhMehFYACLcBGAs/s1600/jerry%2Brice.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1084" height="auto" src="https://3.bp.blogspot.com/-qPBQohNsr3M/XFEVTeNFLEI/AAAAAAAAEvc/gy4MtNZf9dY6aeqYWvYnj2TSdEhMehFYACLcBGAs/s1600/jerry%2Brice.png" width="90%" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-m9obNSonNFI/XFEWCcrOnVI/AAAAAAAAEvo/LrEf7EtqbnoA9xIsyrenYrXNg1g4eAjBACLcBGAs/s1600/Randy%2BMoss.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1084" height="auto" src="https://3.bp.blogspot.com/-m9obNSonNFI/XFEWCcrOnVI/AAAAAAAAEvo/LrEf7EtqbnoA9xIsyrenYrXNg1g4eAjBACLcBGAs/s1600/Randy%2BMoss.png" width="90%" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"></div><br 
/><br />Jerry Rice was incredible. His prime is unmatched. He lost a step towards the end of his career, but since this takes the best 6 seasons, that doesn't hurt his score. Randy Moss's legendary 2003 and 2007 seasons are about average for Rice during his prime. Rice led the league in receiving yards 6 times. No other receiver in the modern era has led it more than twice.<br /><br />It's not like there were no other good wide receivers during his prime either. Remembering that this is a measure of how much better he was than his peers, here are some of his peers in the 1990 season:<br /><ul><li>James Lofton - HOF</li><li>Cris Carter - HOF</li><li>Andre Reed - HOF</li><li>Art Monk - HOF</li><li>Andre Rison</li><li>Sterling Sharpe</li></ul>The gap between him and those elite peers was enormous. He actually won the MVP award as a wide receiver!<br /><br />What about active receivers...how do Julio Jones, Larry Fitzgerald, and Antonio Brown compare?<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-EMXhOjBR884/XFEWh91i9rI/AAAAAAAAEv0/oHj_2M73Ljk74TWOXEGlhZ6frDcY-xBMACLcBGAs/s1600/active.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="563" data-original-width="1079" height="auto" src="https://1.bp.blogspot.com/-EMXhOjBR884/XFEWh91i9rI/AAAAAAAAEv0/oHj_2M73Ljk74TWOXEGlhZ6frDcY-xBMACLcBGAs/s1600/active.png" width="90%" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div><br />All three of them have been well above average for large portions of their careers. Brown's peak is just higher. Julio's lack of touchdowns is the biggest difference between them so far. I could remove it from the stat score, but it seems like touchdowns and yards are the primary metrics for a wide receiver.</div><div><br /></div><div>For awards, I considered only pro bowl and first-team all-pro voting. The award scores are simple. 
If a player was sent to the pro bowl, he gets 0.083 added to his score for that season. If he was named a first-team all-pro, he gets 0.167 added to his score for that season. A wide receiver who was sent to 6 pro bowls and named a first-team all-pro in all 6 seasons during his prime will get a perfect award score of 0.25 (the career award score is the average of the per-season values across the prime).</div><div><br /><h4>Conclusion</h4></div></div></div><div>I really like this rough methodology for comparing players across eras. If you have any feedback, suggestions, etc., let me know in the comments.</div><div><br /></div><div><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-61777998957127577702019-01-31T19:36:00.000-08:002019-01-31T19:36:20.618-08:00China and India Have Huge Populations<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/plotly.js/1.44.1/plotly.min.js"></script>I had an idea for visualizing just how many people live in India and China.<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-dEzyZ-fDfqQ/XFOqJGpAnHI/AAAAAAAAEwE/bw65RnSiRfgol_0Xuz3oL8xy-mMzo5N_wCLcBGAs/s1600/gif%2Bcaptured%2Bno%2Bloop.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="703" data-original-width="1600" height="auto" src="https://2.bp.blogspot.com/-dEzyZ-fDfqQ/XFOqJGpAnHI/AAAAAAAAEwE/bw65RnSiRfgol_0Xuz3oL8xy-mMzo5N_wCLcBGAs/s1600/gif%2Bcaptured%2Bno%2Bloop.gif" width="0%" /></a></div><a name='more'></a>The code below picks countries at random until their total population is greater than the combined populations of India and China. It takes an impressively long time for it to stop. I really liked this method for showing it. 
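To get a feel for the scale before the interactive version, here's a quick back-of-the-envelope check in Python using a handful of the larger population figures from the embedded dataset:

```python
# Population figures copied from the post's embedded country dataset.
china, india = 1_330_044_000, 1_173_108_018
target = china + india

# A few of the most populous remaining countries in that dataset:
big_others = {
    "Indonesia": 242_968_342,
    "Brazil": 201_103_330,
    "Pakistan": 184_404_791,
    "Bangladesh": 156_118_464,
    "Nigeria": 154_000_000,
    "Japan": 127_288_000,
}

# Even these six together cover well under half of China + India,
# which is why the random selection keeps going for so long.
assert sum(big_others.values()) < target / 2
```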
Since it is random, refreshing the page will give you a different set of countries.<br /><br /><div style="display: grid; grid-template-columns: 9fr 2fr; margin-left: -30px; margin-right: -30px;"><div id="map" style="display: grid; height: 480px; width: 100%;"></div><div id="bar" style="display: grid; height: 480px; width: 100%;"></div></div><div id="countries" style="font-size: 14px; margin-left: auto; margin-right: auto; width: 100%;"></div><br /><script>let data = [ { "CODE": "AND", "NAME": "Andorra", "POP": "84,000" }, { "CODE": "ARE", "NAME": "United Arab Emirates", "POP": "4,975,593" }, { "CODE": "AFG", "NAME": "Afghanistan", "POP": "29,121,286" }, { "CODE": "ATG", "NAME": "Antigua and Barbuda", "POP": "86,754" }, { "CODE": "AIA", "NAME": "Anguilla", "POP": "13,254" }, { "CODE": "ALB", "NAME": "Albania", "POP": "2,986,952" }, { "CODE": "ARM", "NAME": "Armenia", "POP": "2,968,000" }, { "CODE": "ANT", "NAME": "Netherlands Antilles", "POP": "300,000" }, { "CODE": "AGO", "NAME": "Angola", "POP": "13,068,161" }, { "CODE": "ATA", "NAME": "Antarctica", "POP": 0 }, { "CODE": "ARG", "NAME": "Argentina", "POP": "41,343,201" }, { "CODE": "ASM", "NAME": "American Samoa", "POP": "57,881" }, { "CODE": "AUT", "NAME": "Austria", "POP": "8,205,000" }, { "CODE": "AUS", "NAME": "Australia", "POP": "21,515,754" }, { "CODE": "ABW", "NAME": "Aruba", "POP": "71,566" }, { "CODE": "ALA", "NAME": "Aland", "POP": "26,711" }, { "CODE": "AZE", "NAME": "Azerbaijan", "POP": "8,303,512" }, { "CODE": "BIH", "NAME": "Bosnia and Herzegovina", "POP": "4,590,000" }, { "CODE": "BRB", "NAME": "Barbados", "POP": "285,653" }, { "CODE": "BGD", "NAME": "Bangladesh", "POP": "156,118,464" }, { "CODE": "BEL", "NAME": "Belgium", "POP": "10,403,000" }, { "CODE": "BFA", "NAME": "Burkina Faso", "POP": "16,241,811" }, { "CODE": "BGR", "NAME": "Bulgaria", "POP": "7,148,785" }, { "CODE": "BHR", "NAME": "Bahrain", "POP": "738,004" }, { "CODE": "BDI", "NAME": "Burundi", "POP": "9,863,117" }, { "CODE": "BEN", 
"NAME": "Benin", "POP": "9,056,010" }, { "CODE": "BLM", "NAME": "Saint Barthelemy", "POP": "8,450" }, { "CODE": "BMU", "NAME": "Bermuda", "POP": "65,365" }, { "CODE": "BRN", "NAME": "Brunei", "POP": "395,027" }, { "CODE": "BOL", "NAME": "Bolivia", "POP": "9,947,418" }, { "CODE": "BES", "NAME": "Bonaire", "POP": "18,012" }, { "CODE": "BRA", "NAME": "Brazil", "POP": "201,103,330" }, { "CODE": "BHS", "NAME": "Bahamas", "POP": "301,790" }, { "CODE": "BTN", "NAME": "Bhutan", "POP": "699,847" }, { "CODE": "BVT", "NAME": "Bouvet Island", "POP": 0 }, { "CODE": "BWA", "NAME": "Botswana", "POP": "2,029,307" }, { "CODE": "BLR", "NAME": "Belarus", "POP": "9,685,000" }, { "CODE": "BLZ", "NAME": "Belize", "POP": "314,522" }, { "CODE": "CAN", "NAME": "Canada", "POP": "33,679,000" }, { "CODE": "CCK", "NAME": "Cocos [Keeling] Islands", "POP": 628 }, { "CODE": "COD", "NAME": "Democratic Republic of the Congo", "POP": "70,916,439" }, { "CODE": "CAF", "NAME": "Central African Republic", "POP": "4,844,927" }, { "CODE": "COG", "NAME": "Republic of the Congo", "POP": "3,039,126" }, { "CODE": "CHE", "NAME": "Switzerland", "POP": "8,484,100" }, { "CODE": "CIV", "NAME": "Ivory Coast", "POP": "21,058,798" }, { "CODE": "COK", "NAME": "Cook Islands", "POP": "21,388" }, { "CODE": "CHL", "NAME": "Chile", "POP": "16,746,491" }, { "CODE": "CMR", "NAME": "Cameroon", "POP": "19,294,149" }, { "CODE": "CHN", "NAME": "China", "POP": "1,330,044,000" }, { "CODE": "COL", "NAME": "Colombia", "POP": "47,790,000" }, { "CODE": "CRI", "NAME": "Costa Rica", "POP": "4,516,220" }, { "CODE": "SCG", "NAME": "Serbia and Montenegro", "POP": "10,829,175" }, { "CODE": "CUB", "NAME": "Cuba", "POP": "11,423,000" }, { "CODE": "CPV", "NAME": "Cape Verde", "POP": "508,659" }, { "CODE": "CUW", "NAME": "Curacao", "POP": "141,766" }, { "CODE": "CXR", "NAME": "Christmas Island", "POP": "1,500" }, { "CODE": "CYP", "NAME": "Cyprus", "POP": "1,102,677" }, { "CODE": "CZE", "NAME": "Czechia", "POP": "10,476,000" }, { "CODE": "DEU", 
"NAME": "Germany", "POP": "81,802,257" }, { "CODE": "DJI", "NAME": "Djibouti", "POP": "740,528" }, { "CODE": "DNK", "NAME": "Denmark", "POP": "5,484,000" }, { "CODE": "DMA", "NAME": "Dominica", "POP": "72,813" }, { "CODE": "DOM", "NAME": "Dominican Republic", "POP": "9,823,821" }, { "CODE": "DZA", "NAME": "Algeria", "POP": "34,586,184" }, { "CODE": "ECU", "NAME": "Ecuador", "POP": "14,790,608" }, { "CODE": "EST", "NAME": "Estonia", "POP": "1,291,170" }, { "CODE": "EGY", "NAME": "Egypt", "POP": "80,471,869" }, { "CODE": "ESH", "NAME": "Western Sahara", "POP": "273,008" }, { "CODE": "ERI", "NAME": "Eritrea", "POP": "5,792,984" }, { "CODE": "ESP", "NAME": "Spain", "POP": "46,505,963" }, { "CODE": "ETH", "NAME": "Ethiopia", "POP": "88,013,491" }, { "CODE": "FIN", "NAME": "Finland", "POP": "5,244,000" }, { "CODE": "FJI", "NAME": "Fiji", "POP": "875,983" }, { "CODE": "FLK", "NAME": "Falkland Islands", "POP": "2,638" }, { "CODE": "FSM", "NAME": "Micronesia", "POP": "107,708" }, { "CODE": "FRO", "NAME": "Faroe Islands", "POP": "48,228" }, { "CODE": "FRA", "NAME": "France", "POP": "64,768,389" }, { "CODE": "GAB", "NAME": "Gabon", "POP": "1,545,255" }, { "CODE": "GBR", "NAME": "United Kingdom", "POP": "62,348,447" }, { "CODE": "GRD", "NAME": "Grenada", "POP": "107,818" }, { "CODE": "GEO", "NAME": "Georgia", "POP": "4,630,000" }, { "CODE": "GUF", "NAME": "French Guiana", "POP": "195,506" }, { "CODE": "GGY", "NAME": "Guernsey", "POP": "65,228" }, { "CODE": "GHA", "NAME": "Ghana", "POP": "24,339,838" }, { "CODE": "GIB", "NAME": "Gibraltar", "POP": "27,884" }, { "CODE": "GRL", "NAME": "Greenland", "POP": "56,375" }, { "CODE": "GMB", "NAME": "Gambia", "POP": "1,593,256" }, { "CODE": "GIN", "NAME": "Guinea", "POP": "10,324,025" }, { "CODE": "GLP", "NAME": "Guadeloupe", "POP": "443,000" }, { "CODE": "GNQ", "NAME": "Equatorial Guinea", "POP": "1,014,999" }, { "CODE": "GRC", "NAME": "Greece", "POP": "11,000,000" }, { "CODE": "SGS", "NAME": "South Georgia and the South Sandwich 
Islands", "POP": 30 }, { "CODE": "GTM", "NAME": "Guatemala", "POP": "13,550,440" }, { "CODE": "GUM", "NAME": "Guam", "POP": "159,358" }, { "CODE": "GNB", "NAME": "Guinea-Bissau", "POP": "1,565,126" }, { "CODE": "GUY", "NAME": "Guyana", "POP": "748,486" }, { "CODE": "HKG", "NAME": "Hong Kong", "POP": "6,898,686" }, { "CODE": "HMD", "NAME": "Heard Island and McDonald Islands", "POP": 0 }, { "CODE": "HND", "NAME": "Honduras", "POP": "7,989,415" }, { "CODE": "HRV", "NAME": "Croatia", "POP": "4,284,889" }, { "CODE": "HTI", "NAME": "Haiti", "POP": "9,648,924" }, { "CODE": "HUN", "NAME": "Hungary", "POP": "9,982,000" }, { "CODE": "IDN", "NAME": "Indonesia", "POP": "242,968,342" }, { "CODE": "IRL", "NAME": "Ireland", "POP": "4,622,917" }, { "CODE": "ISR", "NAME": "Israel", "POP": "7,353,985" }, { "CODE": "IMN", "NAME": "Isle of Man", "POP": "75,049" }, { "CODE": "IND", "NAME": "India", "POP": "1,173,108,018" }, { "CODE": "IOT", "NAME": "British Indian Ocean Territory", "POP": "4,000" }, { "CODE": "IRQ", "NAME": "Iraq", "POP": "29,671,605" }, { "CODE": "IRN", "NAME": "Iran", "POP": "76,923,300" }, { "CODE": "ISL", "NAME": "Iceland", "POP": "308,910" }, { "CODE": "ITA", "NAME": "Italy", "POP": "60,340,328" }, { "CODE": "JEY", "NAME": "Jersey", "POP": "90,812" }, { "CODE": "JAM", "NAME": "Jamaica", "POP": "2,847,232" }, { "CODE": "JOR", "NAME": "Jordan", "POP": "6,407,085" }, { "CODE": "JPN", "NAME": "Japan", "POP": "127,288,000" }, { "CODE": "KEN", "NAME": "Kenya", "POP": "40,046,566" }, { "CODE": "KGZ", "NAME": "Kyrgyzstan", "POP": "5,776,500" }, { "CODE": "KHM", "NAME": "Cambodia", "POP": "14,453,680" }, { "CODE": "KIR", "NAME": "Kiribati", "POP": "92,533" }, { "CODE": "COM", "NAME": "Comoros", "POP": "773,407" }, { "CODE": "KNA", "NAME": "Saint Kitts and Nevis", "POP": "51,134" }, { "CODE": "PRK", "NAME": "North Korea", "POP": "22,912,177" }, { "CODE": "KOR", "NAME": "South Korea", "POP": "48,422,644" }, { "CODE": "KWT", "NAME": "Kuwait", "POP": "2,789,132" }, { "CODE": 
"CYM", "NAME": "Cayman Islands", "POP": "44,270" }, { "CODE": "KAZ", "NAME": "Kazakhstan", "POP": "15,340,000" }, { "CODE": "LAO", "NAME": "Laos", "POP": "6,368,162" }, { "CODE": "LBN", "NAME": "Lebanon", "POP": "4,125,247" }, { "CODE": "LCA", "NAME": "Saint Lucia", "POP": "160,922" }, { "CODE": "LIE", "NAME": "Liechtenstein", "POP": "35,000" }, { "CODE": "LKA", "NAME": "Sri Lanka", "POP": "21,513,990" }, { "CODE": "LBR", "NAME": "Liberia", "POP": "3,685,076" }, { "CODE": "LSO", "NAME": "Lesotho", "POP": "1,919,552" }, { "CODE": "LTU", "NAME": "Lithuania", "POP": "2,944,459" }, { "CODE": "LUX", "NAME": "Luxembourg", "POP": "497,538" }, { "CODE": "LVA", "NAME": "Latvia", "POP": "2,217,969" }, { "CODE": "LBY", "NAME": "Libya", "POP": "6,461,454" }, { "CODE": "MAR", "NAME": "Morocco", "POP": "33,848,242" }, { "CODE": "MCO", "NAME": "Monaco", "POP": "32,965" }, { "CODE": "MDA", "NAME": "Moldova", "POP": "4,324,000" }, { "CODE": "MNE", "NAME": "Montenegro", "POP": "666,730" }, { "CODE": "MAF", "NAME": "Saint Martin", "POP": "35,925" }, { "CODE": "MDG", "NAME": "Madagascar", "POP": "21,281,844" }, { "CODE": "MHL", "NAME": "Marshall Islands", "POP": "65,859" }, { "CODE": "MKD", "NAME": "Macedonia", "POP": "2,062,294" }, { "CODE": "MLI", "NAME": "Mali", "POP": "13,796,354" }, { "CODE": "MMR", "NAME": "Myanmar [Burma]", "POP": "53,414,374" }, { "CODE": "MNG", "NAME": "Mongolia", "POP": "3,086,918" }, { "CODE": "MAC", "NAME": "Macao", "POP": "449,198" }, { "CODE": "MNP", "NAME": "Northern Mariana Islands", "POP": "53,883" }, { "CODE": "MTQ", "NAME": "Martinique", "POP": "432,900" }, { "CODE": "MRT", "NAME": "Mauritania", "POP": "3,205,060" }, { "CODE": "MSR", "NAME": "Montserrat", "POP": "9,341" }, { "CODE": "MLT", "NAME": "Malta", "POP": "403,000" }, { "CODE": "MUS", "NAME": "Mauritius", "POP": "1,294,104" }, { "CODE": "MDV", "NAME": "Maldives", "POP": "395,650" }, { "CODE": "MWI", "NAME": "Malawi", "POP": "15,447,500" }, { "CODE": "MEX", "NAME": "Mexico", "POP": 
"112,468,855" }, { "CODE": "MYS", "NAME": "Malaysia", "POP": "28,274,729" }, { "CODE": "MOZ", "NAME": "Mozambique", "POP": "22,061,451" }, { "CODE": "NAM", "NAME": "Namibia", "POP": "2,128,471" }, { "CODE": "NCL", "NAME": "New Caledonia", "POP": "216,494" }, { "CODE": "NER", "NAME": "Niger", "POP": "15,878,271" }, { "CODE": "NFK", "NAME": "Norfolk Island", "POP": "1,828" }, { "CODE": "NGA", "NAME": "Nigeria", "POP": "154,000,000" }, { "CODE": "NIC", "NAME": "Nicaragua", "POP": "5,995,928" }, { "CODE": "NLD", "NAME": "Netherlands", "POP": "16,645,000" }, { "CODE": "NOR", "NAME": "Norway", "POP": "5,009,150" }, { "CODE": "NPL", "NAME": "Nepal", "POP": "28,951,852" }, { "CODE": "NRU", "NAME": "Nauru", "POP": "10,065" }, { "CODE": "NIU", "NAME": "Niue", "POP": "2,166" }, { "CODE": "NZL", "NAME": "New Zealand", "POP": "4,252,277" }, { "CODE": "OMN", "NAME": "Oman", "POP": "2,967,717" }, { "CODE": "PAN", "NAME": "Panama", "POP": "3,410,676" }, { "CODE": "PER", "NAME": "Peru", "POP": "29,907,003" }, { "CODE": "PYF", "NAME": "French Polynesia", "POP": "270,485" }, { "CODE": "PNG", "NAME": "Papua New Guinea", "POP": "6,064,515" }, { "CODE": "PHL", "NAME": "Philippines", "POP": "99,900,177" }, { "CODE": "PAK", "NAME": "Pakistan", "POP": "184,404,791" }, { "CODE": "POL", "NAME": "Poland", "POP": "38,500,000" }, { "CODE": "SPM", "NAME": "Saint Pierre and Miquelon", "POP": "7,012" }, { "CODE": "PCN", "NAME": "Pitcairn Islands", "POP": 46 }, { "CODE": "PRI", "NAME": "Puerto Rico", "POP": "3,916,632" }, { "CODE": "PSE", "NAME": "Palestine", "POP": "3,800,000" }, { "CODE": "PRT", "NAME": "Portugal", "POP": "10,676,000" }, { "CODE": "PLW", "NAME": "Palau", "POP": "19,907" }, { "CODE": "PRY", "NAME": "Paraguay", "POP": "6,375,830" }, { "CODE": "QAT", "NAME": "Qatar", "POP": "840,926" }, { "CODE": "REU", "NAME": "Reunion", "POP": "776,948" }, { "CODE": "ROU", "NAME": "Romania", "POP": "21,959,278" }, { "CODE": "SRB", "NAME": "Serbia", "POP": "7,344,847" }, { "CODE": "RUS", "NAME": 
"Russia", "POP": "140,702,000" }, { "CODE": "RWA", "NAME": "Rwanda", "POP": "11,055,976" }, { "CODE": "SAU", "NAME": "Saudi Arabia", "POP": "25,731,776" }, { "CODE": "SLB", "NAME": "Solomon Islands", "POP": "559,198" }, { "CODE": "SYC", "NAME": "Seychelles", "POP": "88,340" }, { "CODE": "SDN", "NAME": "Sudan", "POP": "35,000,000" }, { "CODE": "SWE", "NAME": "Sweden", "POP": "9,828,655" }, { "CODE": "SGP", "NAME": "Singapore", "POP": "4,701,069" }, { "CODE": "SHN", "NAME": "Saint Helena", "POP": "7,460" }, { "CODE": "SVN", "NAME": "Slovenia", "POP": "2,007,000" }, { "CODE": "SJM", "NAME": "Svalbard and Jan Mayen", "POP": "2,550" }, { "CODE": "SVK", "NAME": "Slovakia", "POP": "5,455,000" }, { "CODE": "SLE", "NAME": "Sierra Leone", "POP": "5,245,695" }, { "CODE": "SMR", "NAME": "San Marino", "POP": "31,477" }, { "CODE": "SEN", "NAME": "Senegal", "POP": "12,323,252" }, { "CODE": "SOM", "NAME": "Somalia", "POP": "10,112,453" }, { "CODE": "SUR", "NAME": "Suriname", "POP": "492,829" }, { "CODE": "SSD", "NAME": "South Sudan", "POP": "8,260,490" }, { "CODE": "STP", "NAME": "Sao Tome and Principe", "POP": "197,700" }, { "CODE": "SLV", "NAME": "El Salvador", "POP": "6,052,064" }, { "CODE": "SXM", "NAME": "Sint Maarten", "POP": "37,429" }, { "CODE": "SYR", "NAME": "Syria", "POP": "22,198,110" }, { "CODE": "SWZ", "NAME": "Swaziland", "POP": "1,354,051" }, { "CODE": "TCA", "NAME": "Turks and Caicos Islands", "POP": "20,556" }, { "CODE": "TCD", "NAME": "Chad", "POP": "10,543,464" }, { "CODE": "ATF", "NAME": "French Southern Territories", "POP": 140 }, { "CODE": "TGO", "NAME": "Togo", "POP": "6,587,239" }, { "CODE": "THA", "NAME": "Thailand", "POP": "67,089,500" }, { "CODE": "TJK", "NAME": "Tajikistan", "POP": "7,487,489" }, { "CODE": "TKL", "NAME": "Tokelau", "POP": "1,466" }, { "CODE": "TLS", "NAME": "East Timor", "POP": "1,154,625" }, { "CODE": "TKM", "NAME": "Turkmenistan", "POP": "4,940,916" }, { "CODE": "TUN", "NAME": "Tunisia", "POP": "10,589,025" }, { "CODE": "TON", 
"NAME": "Tonga", "POP": "122,580" }, { "CODE": "TUR", "NAME": "Turkey", "POP": "77,804,122" }, { "CODE": "TTO", "NAME": "Trinidad and Tobago", "POP": "1,328,019" }, { "CODE": "TUV", "NAME": "Tuvalu", "POP": "10,472" }, { "CODE": "TWN", "NAME": "Taiwan", "POP": "22,894,384" }, { "CODE": "TZA", "NAME": "Tanzania", "POP": "41,892,895" }, { "CODE": "UKR", "NAME": "Ukraine", "POP": "45,415,596" }, { "CODE": "UGA", "NAME": "Uganda", "POP": "33,398,682" }, { "CODE": "UMI", "NAME": "U.S. Minor Outlying Islands", "POP": 0 }, { "CODE": "USA", "NAME": "United States", "POP": "310,232,863" }, { "CODE": "URY", "NAME": "Uruguay", "POP": "3,477,000" }, { "CODE": "UZB", "NAME": "Uzbekistan", "POP": "27,865,738" }, { "CODE": "VAT", "NAME": "Vatican City", "POP": 921 }, { "CODE": "VCT", "NAME": "Saint Vincent and the Grenadines", "POP": "104,217" }, { "CODE": "VEN", "NAME": "Venezuela", "POP": "27,223,228" }, { "CODE": "VGB", "NAME": "British Virgin Islands", "POP": "21,730" }, { "CODE": "VIR", "NAME": "U.S. 
Virgin Islands", "POP": "108,708" }, { "CODE": "VNM", "NAME": "Vietnam", "POP": "89,571,130" }, { "CODE": "VUT", "NAME": "Vanuatu", "POP": "221,552" }, { "CODE": "WLF", "NAME": "Wallis and Futuna", "POP": "16,025" }, { "CODE": "WSM", "NAME": "Samoa", "POP": "192,001" }, { "CODE": "XKX", "NAME": "Kosovo", "POP": "1,800,000" }, { "CODE": "YEM", "NAME": "Yemen", "POP": "23,495,361" }, { "CODE": "MYT", "NAME": "Mayotte", "POP": "159,042" }, { "CODE": "ZAF", "NAME": "South Africa", "POP": "49,000,000" }, { "CODE": "ZMB", "NAME": "Zambia", "POP": "13,460,305" }, { "CODE": "ZWE", "NAME": "Zimbabwe", "POP": "13,061,000" } ]; codes = []; names = []; pops = []; actpops = []; namestext = []; for (let i = 0; i < data.length; i++) { codes.push(data[i]['CODE']); names.push(data[i]['NAME'] + ': ' + data[i]['POP']); namestext.push(data[i]['NAME']); try { actpops.push(Number(data[i]['POP'].replace(/,/g,''))); } catch (ex) {actpops.push(-1);} if ((data[i]['CODE'] == 'IND') || (data[i]['CODE'] == 'CHN')) { pops.push(1); } else { pops.push(0); } } let layout = { geo: { lataxis: { 'range': [-60,70] }, resolution: 75 }, margin: { l: 5, r: 5, t: 0, b: 0, pad: 0 } } let i = 0; let cumpop = 0; used = {'CHN': 0, 'IND': 0}; var interval = setInterval(function() { let index = Math.floor(Math.random() * codes.length); while (used[codes[index]] !== undefined) { index = Math.floor(Math.random() * codes.length); } used[codes[index]] = 0; pops[index] = -1; cumpop += actpops[index]; if (i++ === 0) { $('#countries')[0].innerText = namestext[index]; } else { $('#countries')[0].innerText = $('#countries')[0].innerText + ', ' + namestext[index]; } let trace = { type: 'choropleth', locations: codes, z: pops, text: names, showscale: false, hoverinfo: 'text' } Plotly.newPlot('map', [trace], layout, {displayModeBar: false}); Plotly.react('bar', [{x: ['China + <br>India'], y: [2503152018], type: 'bar', name: 'China + India', marker: {color:['rgb(178, 10, 28)']}},{x: ['Others'], y:[cumpop], type: 'bar', 
name: 'Others', marker: {color: ['rgb(5, 10, 172)']}}], {showlegend: false, yaxis: { title: 'combined population' }, margin: { t: 25, b: 35, l: 50, r: 5, pad: 0 }}, {displayModeBar: false}); if (cumpop > 2503152018) { clearInterval(interval);} }, 500); </script>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-81607618278946117692019-01-26T09:40:00.000-08:002019-02-03T22:22:23.875-08:00Should Football Teams Go for Two When Losing by 14 Points?If your team is down by 14 and scores a touchdown, should they kick an extra point or go for two?<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-wFfaHYdSfqA/XEU0SH72N0I/AAAAAAAAEtg/trM_Df8QAFE2VIKT4djofWOcpcKm4KergCLcBGAs/s1600/50per%2BOT%2Bcontour.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="586" data-original-width="991" height="auto" src="https://3.bp.blogspot.com/-wFfaHYdSfqA/XEU0SH72N0I/AAAAAAAAEtg/trM_Df8QAFE2VIKT4djofWOcpcKm4KergCLcBGAs/s1600/50per%2BOT%2Bcontour.png" width="0%" /></a></div><a name='more'></a>The three possible outcomes in regulation are:<br /><ul><li>score less than 14 pts and lose</li><li>score 14 pts and go to overtime (OT)</li><li>score more than 14 points and win</li></ul><div>There are three possible strategies for getting at least 14 points, so let's examine each one's win probability.</div><div><br /></div><h4>Summary of results if you just want the answer</h4><div>If you are thinking of maybe going for two to win, always do that on the first td instead of the second one. The decision to go for two over kicking an extra point is not so clear and depends on the probability of making an extra point, making a two-point conversion, and winning in OT. 
<a href="https://docs.google.com/spreadsheets/d/1iKsjEGOc47zienRrmfKQgoioDGkbNT7pl0yOvT9Tf1Y/edit?usp=sharing">I added a spreadsheet to play with the numbers and see the results and it's here</a>.</div><br />Throughout, let p1 be the probability of an extra point attempt succeeding and p2 be the probability of a two-point attempt succeeding. Call pOT the probability of winning in overtime.<br /><br /><h4>Strategy 1</h4><div><ul><li>kick an extra point</li><li>if you fail, go for two on the next td</li><li>if you succeed, kick an extra point on the next td</li></ul><div>Consider the outcomes:<br /><br /><div style="text-align: center;"><table style="margin-left: auto; margin-right: auto;"><tbody><tr style="background-color: #e6a117;"><th>path</th><th>probability</th><th>points</th></tr><tr><td>fail on first; fail on second</td><td>(1 - p1)*(1 - p2)</td><td>12</td></tr><tr><td>fail on first; succeed on second</td><td>(1 - p1)*(p2)</td><td>14</td></tr><tr><td>succeed on first; fail on second</td><td>(p1)*(1 - p1)</td><td>13</td></tr><tr><td>succeed on first; succeed on second</td><td>(p1)*(p1)</td><td>14</td></tr></tbody></table><br /></div>Converting those point totals to results, that's lose, OT, lose, OT. 
The probability of winning in OT is pOT, so summing the non-loss ones up and multiplying the OT ones by pOT, the probability of winning with strategy 1 is:<br /><br />pOT*[(1 - p1)*(p2) + (p1)*(p1)]<br /><br /></div></div><h4>Strategy 2</h4><div><ul><li>go for two right away</li><li>if you fail, go for two again on the next td</li><li>if you succeed, kick an extra point on the next td</li></ul><div>Consider the outcomes:<br /><br /><div style="text-align: center;"><table style="margin-left: auto; margin-right: auto;"><tbody><tr style="background-color: #e6a117;"><th>path</th><th>probability</th><th>points</th></tr><tr><td>fail on first; fail on second</td><td>(1 - p2)*(1 - p2)</td><td>12</td></tr><tr><td>fail on first; succeed on second</td><td>(1 - p2)*(p2)</td><td>14</td></tr><tr><td>succeed on first; fail on second</td><td>(p2)*(1 - p1)</td><td>14</td></tr><tr><td>succeed on first; succeed on second</td><td>(p2)*(p1)</td><td>15</td></tr></tbody></table><br /></div>Converting those point totals to results, that's lose, OT, OT, win. 
The probability of winning in OT is pOT, so summing the non-loss ones up and multiplying the OT ones by pOT, the probability of winning with strategy 2 is:<br /><br />pOT*(1 - p2)*p2 + pOT*(1 - p1)*p2 + (p1)*p2<br /><br /><div><br /><h4>Strategy 3</h4><div><ul><li>kick an extra point</li><li>go for two no matter what after the next td</li></ul><div>Consider the outcomes:<br /><br /><div style="text-align: center;"><table style="margin-left: auto; margin-right: auto;"><tbody><tr style="background-color: #e6a117;"><th>path</th><th>probability</th><th>points</th></tr><tr><td>fail on first; fail on second</td><td>(1 - p1)*(1 - p2)</td><td>12</td></tr><tr><td>fail on first; succeed on second</td><td>(1 - p1)*(p2)</td><td>14</td></tr><tr><td>succeed on first; fail on second</td><td>(p1)*(1 - p2)</td><td>13</td></tr><tr><td>succeed on first; succeed on second</td><td>(p1)*(p2)</td><td>15</td></tr></tbody></table><br /></div>Converting those point totals to results, that's lose, OT, lose, win. The probability of winning in OT is pOT, so summing the non-loss ones up and multiplying the OT ones by pOT, the probability of winning with strategy 3 is:<br /><br />pOT*(1 - p1)*(p2) + (p1)*(p2)<br /><br /><h4>Which strategy is best?</h4></div></div><div>First...look at the equations for strategies 2 and 3:<br /><br />pOT*(1 - p2)*p2 + pOT*(1 - p1)*p2 + (p1)*p2 for #2</div><div>pOT*(1 - p1)*(p2) + (p1)*(p2) for #3</div><div><br /></div><div>Those are identical except that strategy 2 has an additional pOT*(1 - p2)*p2 term. Since p1, p2, and pOT are all between 0 and 1 (since they're probabilities), that value is always greater than or equal to zero. That is, strategy 2's win percentage is always the same as or better than strategy 3's. You should never use strategy 3.</div><div><br /></div><div>What about strategy 1 vs strategy 2? That one depends on your specific probabilities. 
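Plugging numbers into the three formulas makes the comparison concrete. Here's a quick sketch (the example values p1 = 0.95, p2 = 0.48, pOT = 0.5 match the NFL-ish defaults used later):

```python
def win_prob(p1, p2, pOT):
    """Win probability of each strategy, straight from the formulas above."""
    s1 = pOT * ((1 - p1) * p2 + p1 * p1)                      # outcomes: lose, OT, lose, OT
    s2 = pOT * (1 - p2) * p2 + pOT * (1 - p1) * p2 + p1 * p2  # outcomes: lose, OT, OT, win
    s3 = pOT * (1 - p1) * p2 + p1 * p2                        # outcomes: lose, OT, lose, win
    return s1, s2, s3

s1, s2, s3 = win_prob(p1=0.95, p2=0.48, pOT=0.5)
print(f"strategy 1: {s1:.0%}, strategy 2: {s2:.0%}, strategy 3: {s3:.0%}")
# strategy 1: 46%, strategy 2: 59%, strategy 3: 47%
```
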
I don't know of a great way to visualize this since there are enough variables to make it really confusing, so I put together a spreadsheet that you can play with. <a href="https://docs.google.com/spreadsheets/d/1iKsjEGOc47zienRrmfKQgoioDGkbNT7pl0yOvT9Tf1Y/edit?usp=sharing">It is here (same link as the one at the beginning)</a>.<br /><br />I have it set by default to the NFL numbers I was able to find (48% for 2 pt conversion, 95% for extra point). I have no idea how to estimate odds of winning OT, so I set that to 50%. Presumably that varies by team quality, fatigue, home field advantage, etc., so it would be a game-time call. With those numbers, the success percentages of the strategies are:</div><div><ul><li>46% for strategy 1</li><li>59% for strategy 2</li><li>47% for strategy 3</li></ul></div><div>To try to see the whole picture at once, contour plots are handy. Here are three examples. This is the win percentage gained by choosing strategy 2 over strategy 1 for all possibilities of extra point and 2-pt conversion success rates. The three examples here are three different assumptions for your chances in OT...30%, 50%, and 70%. The gains are represented as multipliers. That is, '3x' here means that choosing strategy 2 makes you 3 times as likely to win as choosing strategy 1 does. '0.33x' means that choosing strategy 1 makes you 3 times as likely to win as choosing strategy 2 does. Greater than 1 means choose strategy 2. Less than 1 means choose strategy 1. 
Exactly '1x' means the two strategies are equally likely to work.</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-twnxa_M1Lqs/XEU0SEcffwI/AAAAAAAAEtk/QepIcM5wqKsSx65Ztw89xQyTIgMzi2VmgCLcBGAs/s1600/30per%2BOT%2Bcontour.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="586" data-original-width="991" height="auto" src="https://3.bp.blogspot.com/-twnxa_M1Lqs/XEU0SEcffwI/AAAAAAAAEtk/QepIcM5wqKsSx65Ztw89xQyTIgMzi2VmgCLcBGAs/s1600/30per%2BOT%2Bcontour.png" width="90%" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-wFfaHYdSfqA/XEU0SH72N0I/AAAAAAAAEtg/trM_Df8QAFE2VIKT4djofWOcpcKm4KergCLcBGAs/s1600/50per%2BOT%2Bcontour.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="586" data-original-width="991" height="auto" src="https://3.bp.blogspot.com/-wFfaHYdSfqA/XEU0SH72N0I/AAAAAAAAEtg/trM_Df8QAFE2VIKT4djofWOcpcKm4KergCLcBGAs/s1600/50per%2BOT%2Bcontour.png" width="90%" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-Ax2yKqKDMso/XEU0SAvTB_I/AAAAAAAAEtc/VCsnI0Pn7OMj_QeCaLSdnZppfHUif3rEQCLcBGAs/s1600/70per%2BOT%2Bcontour.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="586" data-original-width="991" height="auto" src="https://4.bp.blogspot.com/-Ax2yKqKDMso/XEU0SAvTB_I/AAAAAAAAEtc/VCsnI0Pn7OMj_QeCaLSdnZppfHUif3rEQCLcBGAs/s1600/70per%2BOT%2Bcontour.png" width="90%" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div>You can clearly see that strategy 1 becomes more attractive if you feel that you are likely to win if it goes to OT, but strategy 2 is more attractive overall if your 2 point conversion % is in the 
40-50% range or better.<br /><br />Zooming into the most likely region in my opinion, here are the numeric results when the probability of winning in OT is 50%. The table is ordered with 'success % for extra points' as the columns and 'success % for 2-pt conversions' as the rows. The numbers are again the win-probability multipliers from choosing strategy 2 instead of strategy 1 (values below 1.00x favor strategy 1).<br /><br /><div style="text-align: center;"><table style="margin-left: auto; margin-right: auto;"><tbody><tr style="background-color: #e6a117;"><th></th><th>75.0%</th><th>80.0%</th><th>85.0%</th><th>90.0%</th><th>95.0%</th><th>100.0%</th></tr><tr><th style="background-color: #e6a117;">75.0%</th><td>2.00x</td><td>1.95x</td><td>1.89x</td><td>1.82x</td><td>1.76x</td><td>1.69x</td></tr><tr><th style="background-color: #e6a117;">65.0%</th><td>1.88x</td><td>1.81x</td><td>1.74x</td><td>1.67x</td><td>1.60x</td><td>1.53x</td></tr><tr><th style="background-color: #e6a117;">55.0%</th><td>1.73x</td><td>1.65x</td><td>1.57x</td><td>1.49x</td><td>1.42x</td><td>1.35x</td></tr><tr><th style="background-color: #e6a117;">45.0%</th><td>1.53x</td><td>1.45x</td><td>1.37x</td><td>1.29x</td><td>1.22x</td><td>1.15x</td></tr><tr><th style="background-color: #e6a117;">35.0%</th><td>1.29x</td><td>1.21x</td><td>1.13x</td><td>1.06x</td><td>0.99x</td><td>0.93x</td></tr><tr><th style="background-color: #e6a117;">25.0%</th><td>1.00x</td><td>0.92x</td><td>0.86x</td><td>0.79x</td><td>0.74x</td><td>0.69x</td></tr></tbody></table></div><br />If you want to play with the plotting, <a href="https://colab.research.google.com/drive/1J5TLp4VOu3YV-L7ezBfpjS9s9JzJsBUz">I used python for it and here is a link to my notebook</a>.<br /><br /><br /></div></div></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-50502453838374288332019-01-22T20:23:00.000-08:002019-01-24T07:12:28.387-08:00How Do Marginal Tax Rates Work?If you get a raise and move up a tax bracket, do 
you actually lose money? Does a 70% tax rate on the highest earners mean that you'll lose 70% of your income to taxes? <br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/--q7AGrVMRuk/XEfq6r3SbVI/AAAAAAAAEuA/yarr_rqMmlsJcIH2WqxuYOKHsWsG1W6bwCEwYBhgL/s1600/effective%2Btax%2Brate.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="progressive taxes" border="0" data-original-height="643" data-original-width="1074" height="auto" src="https://4.bp.blogspot.com/--q7AGrVMRuk/XEfq6r3SbVI/AAAAAAAAEuA/yarr_rqMmlsJcIH2WqxuYOKHsWsG1W6bwCEwYBhgL/s1600/effective%2Btax%2Brate.png" title="marginal tax rates" width="0%" /></a></div><a name='more'></a>You can find tax brackets easily online. <a href="https://taxfoundation.org/2018-tax-brackets/">Here's a link</a>. The rate per additional dollar earned in each bracket is its 'marginal tax rate'. How do you convert that into actual lost income or an actual effective tax rate? What if we added another bracket with a marginal rate of 70% for incomes greater than $10 million?<br /><br />A simple way to understand it is to plot it. Here is a plot of post-tax income vs taxable income for an unmarried filer taking only the standard deduction:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-t9ByWcVCnfw/XEfq3CK9rrI/AAAAAAAAEt8/49Yz1NCTnvEMBMK5kzPpbkjODl9HaWBPwCLcBGAs/s1600/take%2Bhome%2Bpay.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="643" data-original-width="1131" height="auto" src="https://4.bp.blogspot.com/-t9ByWcVCnfw/XEfq3CK9rrI/AAAAAAAAEt8/49Yz1NCTnvEMBMK5kzPpbkjODl9HaWBPwCLcBGAs/s1600/take%2Bhome%2Bpay.png" width="90%" /></a></div><br /><br />Notice how as you move to the right, the post-tax income always goes up? The rate that it increases changes, but your post-tax income never goes down. 
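The per-bracket arithmetic behind that plot is easy to sketch in code. Here is a minimal version using the 2018 single-filer brackets from the linked table (the bracket numbers come from that table; the function itself is my own illustration):

```python
# 2018 single-filer marginal brackets: (lower bound of bracket, rate on income above it).
BRACKETS = [(0, 0.10), (9_525, 0.12), (38_700, 0.22), (82_500, 0.24),
            (157_500, 0.32), (200_000, 0.35), (500_000, 0.37)]

def tax_owed(taxable_income):
    """Each rate applies only to the slice of income inside its own bracket."""
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if taxable_income > lower:
            tax += rate * (min(taxable_income, upper) - lower)
    return tax

print(f"effective rate at $200,000 of taxable income: "
      f"{tax_owed(200_000) / 200_000:.1%} vs. a 32% marginal rate")
# effective rate at $200,000 of taxable income: 22.8% vs. a 32% marginal rate
```

Because only the top slice of income is taxed at the top rate, the effective rate always sits below the marginal rate.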
You don't lose any money by moving up a bracket with a tax setup like this. You might earn too much for certain benefits and that can be a problem, but that's not related to the marginal tax rate.<br /><br />Here is a plot of the effective tax rate vs taxable income for the same filer:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/--q7AGrVMRuk/XEfq6r3SbVI/AAAAAAAAEuA/yarr_rqMmlsJcIH2WqxuYOKHsWsG1W6bwCEwYBhgL/s1600/effective%2Btax%2Brate.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="progressive taxes" border="0" data-original-height="643" data-original-width="1074" height="auto" src="https://4.bp.blogspot.com/--q7AGrVMRuk/XEfq6r3SbVI/AAAAAAAAEuA/yarr_rqMmlsJcIH2WqxuYOKHsWsG1W6bwCEwYBhgL/s1600/effective%2Btax%2Brate.png" title="marginal tax rates" width="90%" /></a></div><br />Notice how the effective tax rate is never quite as high as the marginal tax rate? If you're in the 32% bracket, you don't actually lose 32% of your income to federal tax.<br /><br />In reality, you'll pay lower federal rates than this. There are other deductions and tax credits. Capital gains and dividends are also taxed at lower rates. That doesn't change the general idea though. Whatever your marginal rate is, your effective federal tax rate will always be below it, and will often be way below it.<br /><br />Hopefully this clears up any misconceptions like the ones in the opening questions. If any remain, just let me know in the comments.<br /><br /><br />theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com8tag:blogger.com,1999:blog-1532419805701836386.post-82631647816488083152019-01-18T19:06:00.001-08:002019-01-18T19:50:56.078-08:00Handling Outliers in Linear RegressionWhat are some of the techniques for handling outliers in linear regression, and how do they compare? I evaluate several in Python. 
<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-O4cVg1h36zo/XEKQ9oD_X9I/AAAAAAAAEr8/_jymKdXTpYUBSPnE1B1TtiX_SzmjZ3rGQCLcBGAs/s1600/outliers%2Bin%2Bsame%2Bdirection.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="535" data-original-width="881" height="auto" src="https://4.bp.blogspot.com/-O4cVg1h36zo/XEKQ9oD_X9I/AAAAAAAAEr8/_jymKdXTpYUBSPnE1B1TtiX_SzmjZ3rGQCLcBGAs/s1600/outliers%2Bin%2Bsame%2Bdirection.png" width="0%" /></a></div><a name='more'></a><h4>Methods</h4><div>I'm taking sample data with a few different types of outliers, and calculating the slope and intercept using the following methods:<br /><ul><li>linear regression</li><li><a href="https://en.wikipedia.org/wiki/Theil%E2%80%93Sen_estimator">Theil-Sen estimator</a></li><li><a href="https://en.wikipedia.org/wiki/Random_sample_consensus">RANSAC method</a></li><li><a href="https://en.wikipedia.org/wiki/Least_trimmed_squares">least trimmed squares method (LTS)</a></li></ul><div><a href="https://colab.research.google.com/drive/1KYwsGKszLkVxaM7CcnY7-OWk90Qu3OVl">I used python for all of this, and you can see and work with the code here</a>. </div><div><br /></div><div>For all but LTS, I used <a href="https://pypi.org/project/scikit-learn/">scikit-learn</a>. I wrote a custom implementation of LTS because I could not find a nice one when I searched. 
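In sketch form, a custom LTS looks something like this (an illustration of the idea, not the exact notebook code; `fit_score` and the iteration counts are my own choices):

```python
import random
import numpy as np

def fit_score(x, y, idx):
    """Least-squares fit on the subset idx; score = sum of squared residuals."""
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    resid = y[idx] - (slope * x[idx] + intercept)
    return slope, intercept, float(np.sum(resid ** 2))

def lts(x, y, frac=0.6, starts=20, swaps=500, seed=0):
    rng = random.Random(seed)
    n = len(x)
    k = int(frac * n)
    # Steps 1-2: fit several random subsets, keep the best-scoring one.
    best_idx, best = None, None
    for _ in range(starts):
        idx = rng.sample(range(n), k)
        fit = fit_score(x, y, idx)
        if best is None or fit[2] < best[2]:
            best_idx, best = idx, fit
    # Step 3: repeatedly swap one subset point for an outside point,
    # keeping the swap only when it lowers the score.
    for _ in range(swaps):
        pos, cand = rng.randrange(k), rng.randrange(n)
        if cand in best_idx:
            continue
        trial = best_idx[:]
        trial[pos] = cand
        fit = fit_score(x, y, trial)
        if fit[2] < best[2]:
            best_idx, best = trial, fit
    return best[0], best[1]  # slope, intercept
```

Because badly-offset points blow up the squared-residual score, the swap step tends to push outliers out of the fitted subset, so the final fit should track the true line much more closely than plain least squares.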
The LTS algorithm is basically:<br /><ol><li>randomly sample 60% of the points, perform simple linear regression on them, and repeat 20 times</li><li>keep the sample from step 1 that gave you the best score</li><li>replace a point in the sample with another point from the original pool of data, perform simple linear regression, and calculate the score; if it improved, keep the new point; repeat a bunch of times</li></ol><h4>Results</h4></div></div><div>I used three outlier types:</div><div><ol><li>20% of points are all way off in the same direction</li><li>20% of points have large, random errors added to them</li><li>1 point is massively off; error is 50x the total scale of the data</li></ol><div>Overall, simple linear regression resulted in noticeable errors for all three outlier types. All three of the other methods worked well, and LTS and Theil-Sen gave the best results for these data sets and outlier types.</div></div><div><br /></div><div>With an outlier-free slope of 1 and intercept of 0, these are the results:</div><div><br /></div><div style="text-align: center;"><h4 style="text-align: center;">outliers in one direction</h4><div><br /></div></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-O4cVg1h36zo/XEKQ9oD_X9I/AAAAAAAAEr8/_jymKdXTpYUBSPnE1B1TtiX_SzmjZ3rGQCLcBGAs/s1600/outliers%2Bin%2Bsame%2Bdirection.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="ransac regression example" border="0" data-original-height="535" data-original-width="881" height="auto" src="https://4.bp.blogspot.com/-O4cVg1h36zo/XEKQ9oD_X9I/AAAAAAAAEr8/_jymKdXTpYUBSPnE1B1TtiX_SzmjZ3rGQCLcBGAs/s1600/outliers%2Bin%2Bsame%2Bdirection.png" title="linear regression with outliers" width="90%" /></a></div><div><br /></div><div style="text-align: center;"><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;">ideal slope: 1 ideal intercept: 0 simple 
linear regression slope: 0.647 simple linear regression intercept: -1.503 RANSAC slope: 1.03 RANSAC intercept: -2.132 Theil-Sen estimator slope: 0.999 Theil-Sen intercept: -0.004 least trimmed squares slope: 1.0 least trimmed squares intercept: -0.003</span><br /><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;"><br /></span></div><div style="text-align: center;"><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;"><br /></span></div><h4 style="text-align: center;">random outliers</h4><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-1FwPDM5IGnY/XEKRXdUU5xI/AAAAAAAAEsE/I2OxeUhmEiYM30L9YSmslIeB5m_C5okhACLcBGAs/s1600/random%2Boutliers.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="theil-sen estimator example" border="0" data-original-height="535" data-original-width="881" height="auto" src="https://2.bp.blogspot.com/-1FwPDM5IGnY/XEKRXdUU5xI/AAAAAAAAEsE/I2OxeUhmEiYM30L9YSmslIeB5m_C5okhACLcBGAs/s1600/random%2Boutliers.png" title="handle outliers with regression" width="90%" /></a></div><div><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;"><br /></span></div><div style="text-align: center;"><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;">ideal slope: 1 ideal intercept: 0 simple linear regression slope: 0.639 simple linear regression intercept: 8.915 RANSAC slope: 0.997 RANSAC intercept: -0.111 Theil-Sen estimator slope: 1.0 Theil-Sen intercept: 0.006 least trimmed squares slope: 1.0 least trimmed squares intercept: -0.004</span><br /><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;"><br /></span><br /><h4 style="text-align: center;">one big outlier</h4><div><br 
/></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-sVAIqfCUG6s/XEKR-0-XxUI/AAAAAAAAEsY/5ydU1XnvFzQcQZ8kGSQNjMi-JcQPCKOVQCLcBGAs/s1600/one%2Blarge%2Boutlier.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img alt="least trimmed squares regression example" border="0" data-original-height="535" data-original-width="886" height="auto" src="https://4.bp.blogspot.com/-sVAIqfCUG6s/XEKR-0-XxUI/AAAAAAAAEsY/5ydU1XnvFzQcQZ8kGSQNjMi-JcQPCKOVQCLcBGAs/s1600/one%2Blarge%2Boutlier.png" title="robust regression methods" width="90%" /></a></div><div style="text-align: center;"><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;"><br /></span></div><div style="text-align: center;"><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;">ideal slope: 1 ideal intercept: 0 simple linear regression slope: 0.97 simple linear regression intercept: -48.996 RANSAC slope: 1.0 RANSAC intercept: 0.004 Theil-Sen estimator slope: 1.0 Theil-Sen intercept: -0.001 least trimmed squares slope: 1.0 least trimmed squares intercept: 0.001</span><br /><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;"><br /></span><br /><div style="text-align: left;"><span style="background-color: white; color: #212121; font-family: monospace; font-size: 14px; white-space: pre;"><br /></span></div></div></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-64723604636253860142019-01-13T15:42:00.000-08:002019-01-29T17:01:15.769-08:00Who Was the Best Running Back in Modern NFL History?Here is my attempt at answering this question...<br /><div class="separator" style="clear: both; text-align: center;"><a 
href="https://3.bp.blogspot.com/-kdYcmDFlEuA/XDvLRA3noDI/AAAAAAAAErU/wD48uNfTqwQJjDG9MbPuLz46NGZPyqcKgCLcBGAs/s1600/Barry%2BSanders.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="706" data-original-width="1125" height="auto" src="https://3.bp.blogspot.com/-kdYcmDFlEuA/XDvLRA3noDI/AAAAAAAAErU/wD48uNfTqwQJjDG9MbPuLz46NGZPyqcKgCLcBGAs/s1600/Barry%2BSanders.png" width="0%" /></a></div><div><a name='more'></a><h4>Best Career Rankings</h4></div><div>The best way to interpret these rankings is that they answer the following question: 'Which running back's prime was strongest when compared with his direct peers?'</div><div><br /></div><div><table style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr style="background-color: #e6a117;"><th>Player</th><th>Stat Score</th><th>Award Score</th><th>Total Score</th></tr><tr><td>Barry Sanders</td><td>1.74</td><td>0.25</td><td>1.99</td></tr><tr><td>Emmitt Smith</td><td>1.49</td><td>0.22</td><td>1.71</td></tr><tr><td>Marshall Faulk</td><td>1.16</td><td>0.21</td><td>1.37</td></tr><tr><td>Walter Payton</td><td>1.11</td><td>0.24</td><td>1.34</td></tr><tr><td>Eric Dickerson</td><td>1.07</td><td>0.24</td><td>1.31</td></tr><tr><td>Adrian Peterson</td><td>1.08</td><td>0.22</td><td>1.31</td></tr><tr><td>LaDainian Tomlinson</td><td>1.12</td><td>0.18</td><td>1.30</td></tr><tr><td>OJ Simpson</td><td>0.96</td><td>0.24</td><td>1.20</td></tr></tbody></table><br /><b><i>**Important note...this is only in the modern era, so Jim Brown is excluded. This is because I cannot find a reliable way to compare players in those seasons since schedules, parity, etc. were not what they are today**</i></b><br /><b><i><br /></i></b>The methodology used here requires at least 6 prime years for a player (see below). Some new players that might make the list are not included. Among current players, Todd Gurley is the best and has a stat score of 1.40. 
Ezekiel Elliott is second, but with a stat score of 0.81 he is not on track to make the top; he would need really strong performances going forward.<br /><br />Other active players who are close but didn't make it, and likely won't given their ages, are LeSean McCoy with a 0.73 stat score and Marshawn Lynch with a 0.36. Interestingly, if the prime period is shortened to 4 years, Jamaal Charles makes the list with a 1.17.<br /><br /><h4>Best Season Rankings</h4></div><div>According to the stat score here, the best single seasons by a running back in the modern era were:<br /><br /></div><div><table style="margin-left: auto; margin-right: auto;"><tbody><tr style="background-color: #e6a117;"><th>Player</th><th>Year</th><th>Stat Score</th></tr><tr><td>OJ Simpson</td><td>1975</td><td>2.59</td></tr><tr><td>Walter Payton</td><td>1977</td><td>2.55</td></tr><tr><td>Terrell Davis</td><td>1998</td><td>2.34</td></tr><tr><td>Marshall Faulk</td><td>2000</td><td>2.31</td></tr><tr><td>Emmitt Smith</td><td>1995</td><td>2.29</td></tr></tbody></table><br /></div><div>The best season for a still-active running back was Adrian Peterson's 2012 season (12th all-time).<br /><br /></div><h4>Methodology</h4><div>What does it mean to be the best? I settled on two criteria for running backs:</div><div><ul><li><b>statistical performance:</b> how much of a statistical outlier was this player?</li><li><b>awards:</b> how did the media and fans rank this player against his peers?</li></ul><div><a href="http://www.somesolvedproblems.com/p/sports.html">For statistical performance, I used the methodology described here</a>. 
I used the top 24 running backs in each season, defined a running back's prime as his best 6 years, and applied the following weights:<br /><ul><li>total yards per game (0.125)</li><li>rushing yards per game (0.125)</li><li>total touchdowns per game (0.30)</li><li>rushing yards per attempt (0.40)</li><li>fumbles per game (-0.05)</li></ul><div>The resulting score is roughly 'number of standard deviations above his peers during his prime.' Singling out the top 2, here are their careers as an example:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-kdYcmDFlEuA/XDvLRA3noDI/AAAAAAAAErU/wD48uNfTqwQJjDG9MbPuLz46NGZPyqcKgCLcBGAs/s1600/Barry%2BSanders.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="706" data-original-width="1125" height="auto" src="https://3.bp.blogspot.com/-kdYcmDFlEuA/XDvLRA3noDI/AAAAAAAAErU/wD48uNfTqwQJjDG9MbPuLz46NGZPyqcKgCLcBGAs/s1600/Barry%2BSanders.png" width="80%" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-takRq3ObGe0/XDvLURPsWhI/AAAAAAAAErY/b18Q4RZTeAIusqf-3PBa95_C2CgZ_h9RQCLcBGAs/s1600/Emmitt%2BSmith.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="696" data-original-width="1122" height="auto" src="https://3.bp.blogspot.com/-takRq3ObGe0/XDvLURPsWhI/AAAAAAAAErY/b18Q4RZTeAIusqf-3PBa95_C2CgZ_h9RQCLcBGAs/s1600/Emmitt%2BSmith.png" width="80%" /></a></div><br />Barry Sanders was incredible while he played, but he retired early. He would be hurt if this score were based on total stats. Emmitt Smith had a great prime, then tapered off late in his career. He would be hurt if this score were based on average performance over time. 
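As a concrete sketch, the scoring above can be approximated like this (field names and the data shape are my own hypothetical choices for illustration, not the actual analysis code):

```javascript
// Rough sketch of the stat score: for each season, standardize the player's
// stats against the top-24 peer group, apply the listed weights, and sum;
// then average the player's best 6 (prime) seasons.
const WEIGHTS = {
  totalYardsPerGame: 0.125,
  rushingYardsPerGame: 0.125,
  totalTouchdownsPerGame: 0.30,
  rushingYardsPerAttempt: 0.40,
  fumblesPerGame: -0.05
};

// playerStats: one season's stats; peers: stats for the top 24 backs that season
function seasonScore(playerStats, peers) {
  let score = 0;
  for (const stat of Object.keys(WEIGHTS)) {
    const vals = peers.map(p => p[stat]);
    const mean = vals.reduce((a, b) => a + b, 0) / vals.length;
    const variance = vals.reduce((a, b) => a + (b - mean) ** 2, 0) / vals.length;
    const sd = Math.sqrt(variance);
    // a stat with no spread among peers contributes nothing
    if (sd > 0) score += WEIGHTS[stat] * (playerStats[stat] - mean) / sd;
  }
  return score;
}

// seasons: [{ playerStats, peers }, ...] covering a full career
function statScore(seasons, primeLength = 6) {
  const scores = seasons.map(s => seasonScore(s.playerStats, s.peers));
  scores.sort((a, b) => b - a);               // best seasons first
  const prime = scores.slice(0, primeLength); // keep only the prime
  return prime.reduce((a, b) => a + b, 0) / prime.length;
}
```

A player who merely matches his peer-group averages in every prime season scores 0; a 1.74 for Barry Sanders means his prime averaged roughly 1.74 weighted standard deviations above his peers.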
Since this score is a measure of how good your prime was, it captures the best 6 seasons for each of these players and only compares those.</div><div><br /></div><div>For awards, I considered only pro bowl and first-team all-pro voting. I considered including MVP as well, but that award is strongly biased: since there is no positional MVP each season, the voting can be skewed by era. In the past decade, it is much more quarterback-heavy than it was in previous decades. Are running backs significantly worse now, are quarterbacks significantly better, or did the rules and perception in the league change? That's all very unclear, so it's a bad metric for this in my opinion.</div><div><br /></div><div>The award scores are simple. If a player was sent to the pro bowl, he gets 0.083 added to his score for that season. If he was named a first team all-pro, he gets 0.167 added to his score for that season. These per-season scores are averaged over the 6 prime years, so a running back who was sent to the pro bowl and named a first-team all-pro in all 6 of his prime seasons gets a perfect award score of 0.083 + 0.167, or 0.25.<br /><br /><h4>Conclusion</h4></div></div></div><div>I really like this rough methodology for comparing players across eras. If you have any feedback, suggestions, etc., let me know in the comments.</div><div><br /></div><div><br /></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0tag:blogger.com,1999:blog-1532419805701836386.post-69686252158219973032019-01-04T23:42:00.003-08:002019-01-06T08:46:40.081-08:00Historic S&P 500 Variability<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/plotly.js/1.43.1/plotly.min.js"></script>The news often covers 'major' swings in stock prices. How rare are these large swings? 
<br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-Ow4Dl_VvGd8/XDDuuOuGI6I/AAAAAAAAEqY/e8iReKR7yF4LbCFDZcGHPcYUzwER-OIyQCLcBGAs/s1600/sp%2B500.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="503" data-original-width="1093" height="auto" src="https://2.bp.blogspot.com/-Ow4Dl_VvGd8/XDDuuOuGI6I/AAAAAAAAEqY/e8iReKR7yF4LbCFDZcGHPcYUzwER-OIyQCLcBGAs/s1600/sp%2B500.png" width="40%" /></a></div><a name='more'></a>Below is an interactive plot of the daily returns for the S&P 500 since 1950:<br /><br /><div id="plot" style="margin-left: auto; margin-right: auto; width: 100%;"></div><br />It's clear that some big swings have happened. October 1987 and October 2008 are particularly crazy. The years kind of run together there, so this is the same thing with the points grouped by year: <br /><br /><div id="box" style="margin-left: auto; margin-right: auto; width: 100%;"></div><br />The colored box for each year covers the middle 50% of that year's daily returns, and the whiskers extend out toward the most extreme daily returns in that year. Looking at it this way, it's clear that the volatility in 2018 was larger than in most of the past few years, but it was not historically large. 2018 was comparable to 2015 and much less volatile than years like 2008, 2009, and 2011.<br /><br />It's hard to tell how often a 1.5% gain or 2% drop happens though. A histogram would help out here:<br /><br /><div id="hist" style="height: 400px; margin-left: auto; margin-right: auto; width: 100%;"></div><br />Now you can simply read off how often a given daily return happens. That still requires looking at a plot and doing some mental math though. Wouldn't it be easier if I just wrote a tool that did the lookup for you?<br /><br />Here is that tool. 
Just enter a daily return and it will print out where it is in the distribution.<br /><br /><div style="margin-left: auto; margin-right: auto;"><div style="text-align: center;">Enter the daily return <input id="percent" /><br /><br /><div id="returnInfo" style="font-size: 20px;"></div></div><br /><br /><script>let vals = $.getJSON('https://api.myjson.com/bins/kfe1c', function(data) { let x = []; let y = []; let boxData = []; for (let i = 0; i < data.length; i++) { x.push(new Date(data[i].d)); y.push(data[i].v * 100); if (boxData[x[i].getFullYear()] === undefined) { boxData[x[i].getFullYear()] = []; } boxData[x[i].getFullYear()].push(data[i].v * 100); } let trace = { x: x, y: y, type: 'scattergl', mode: 'lines' } let boxTraces = []; for (let i = 1950; i < 2019; i++) { boxTraces.push({ x: i, y: boxData[i], type: 'box', name: i, boxpoints: false }); } let hist = { x: y, autobinx: false, type: 'histogram', histnorm: 'probability', xbins: { start: -10, size: .1, end: 10 } } var layout = { yaxis: { title: '% daily return' }, font: { family: 'Calibri', size: 14, color: '#242128' }, margin: { r: 10 }, title: { text: 'S&P 500 Daily Returns', font: { size: 24 } } } var boxLayout = { showlegend: false, yaxis: { title: '% daily return' }, font: { family: 'Calibri', size: 14, color: '#242128' }, margin: { r: 10 }, title: { text: 'Annual Distributions of S&P 500 Daily Returns', font: { size: 24 } } } var histLayout = { yaxis: { tickvals:[0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07], ticktext: ['0', '1', '2', '3', '4', '5', '6', '7'], title: '% of days', range: [0, .08] }, xaxis: { title: '% daily return' }, font: { family: 'Calibri', size: 14, color: '#242128' }, margin: { r: 10, b: 35 }, title: { text: 'Distribution of Daily S&P 500 Returns', font: { size: 22 } } } Plotly.react('plot', [trace], layout, {responsive: true}); Plotly.react('box', boxTraces, boxLayout, {responsive: true}); Plotly.react('hist', [hist], histLayout, {responsive: true}); }); $('#percent').on('change', 
function() { let val = $(this).val() / 100; let count = 0; for (let i = 0; i < vals.responseJSON.length; i++) { if (val > vals.responseJSON[i].v) { count++; } } $('#returnInfo').html('<b>' + (100 * (1 - (count / vals.responseJSON.length))).toFixed(3) + '% of days since 1950 had better S&P returns than this</b>'); }); </script></div>theboathttp://www.blogger.com/profile/01260139398901806725noreply@blogger.com0
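For reference, the lookup the inline handler performs boils down to a short standalone function (a simplified restatement; loading the array of daily returns is omitted here):

```javascript
// Given daily returns as fractions (0.01 = +1%) and a target return,
// report what percentage of days did at least as well as the target.
function shareOfDaysBetter(returns, target) {
  const worse = returns.filter(r => r < target).length;
  return 100 * (1 - worse / returns.length);
}
```

For example, shareOfDaysBetter([-0.02, 0, 0.01, 0.03], 0.005) returns 50: half of those days gained at least 0.5%.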