Why Skylake CPUs Are Sometimes 50% Slower – How Intel Has Broken Existing Code

Super interesting take on how broken code, plus the repackaging of a bug as a ‘feature’, slows down Intel processors from Skylake onwards!

Alois Kraus

I got a call that some performance regression tests had become slower on newer hardware. Not a big deal; usually it is a bad configuration somewhere in Windows, or some BIOS settings set to non-optimal values. But this time we were not able to find a setting that brought performance back to normal. Since the change was not small, 9 s vs. 19 s (blue is old hardware, orange is new hardware), we needed to drill deeper:


Same OS, Same Hardware, Different CPU – 2 Times Slower

A perf drop from 9.1 s to 19.6 s is definitely significant. We did more checks on whether the software version under test, Windows, or the BIOS settings were somehow different from the old baseline hardware. But nope, everything was identical. The only difference was that the same tests were running on different CPUs. Below is a picture of the newest CPU:


And here is the one…


Information deluge and plummeting attention-spans

O.K., this is not going to be one of those pre-planned and well-structured posts; I’m just going to share whatever’s on my mind right now, and that happens to be a topic that’s super personal to me (and which, frankly, should be to everyone with a smartphone or internet-connected device) — information overload.

Our neural synapses, over the course of the preceding million or so years, evolved to process vast amounts of survival-related information (archetypal prey heuristics, the location of food and water, visual data on poisonous plants, for example) over almost a lifetime. And a lifetime, even 10,000 years ago, was measured in decades — around 4 on average, to be precise — which was plenty of time for memory-consolidation processes to kick in and solidify the learned information and behavior.

Here’s the problem — 10,000 years plus the Agricultural Revolution, the Industrial Revolution, the Information Revolution, and gargantuan amounts of globalization later, our brains find themselves in a place they didn’t evolve to be in, and this causes a number of incompatibilities that wreak havoc on some of the most innate aspects of our species, without which we pretty much lose our identities. I’m in particular talking about ATTENTION, or the ability of an individual to expend it in a meaningful manner. As my rants across the years have proved, I am incredibly uncomfortable with the notion of the tech industry changing (or hijacking, rather) our brains by taking advantage of their plasticity, their hard-wired longings for specific elements in the environment (the roots of which can be traced back to our evolutionary history), and of course, by doling out those super sweet incremental dopamine hits, thereby keeping the ‘attention economy’ chugging.

Here’s a great video talking about what I just wrote (produced by ‘Epipheo’, featuring the dude who wrote the book I reviewed earlier):


As you may have guessed, this is a serious situation. Our brains have been so thoroughly conditioned by the internet that many folks are having a hard time disconnecting from it, and this in turn has spawned a bunch of new industries.

Even long-time tech insiders, folks who stand to benefit the most from all those ‘eyes glued to the screens,’ have spoken out against their former employers, accusing them of deliberately making their devices and services more addictive (by incessantly incorporating what’s called ‘intermittent rewards’) in order to juice their quarterly results — not exactly in illegal territory, but certainly unethical on the part of these billion-dollar corporations to pretty much go Pavlovian on us … ffs those fuckers.

But wait, before you throw in the towel and declare defeat, there surely are some silver linings to the story — Google recently unveiled plans to enhance the “Digital Well-being” of its users in the next version of Android by adding a dashboard to track app usage time, with friendly reminders and a grey-scaling option to boot in case your dopamine circuits don’t co-operate. Two prominent Apple investors wrote to the company asking it to adopt more stringent measures to tackle the issue of children being hooked on its devices and services (except its shitty maps; no one uses Apple Maps). Think-tanks like this are springing up all over the place to spread awareness about how damaging tech can be, not just to our mental health, but also to our physical well-being.

Before I wrap this post up, here are some trusted resources if you’re helplessly glued to that slab of glass:

That’s about it, until next time.


If you wish to support my work, you can do so here — THANK YOU! <3

On the explosion of Artificial Intelligence!

Okay, so this is the first ‘tech’ post I’m publishing after a super long time. The last time I published an in-depth ‘tech’ post was this one, almost a year back, in September 2016, which is an absolute shame because I am, after all, a CS undergrad!

Anyway, without further ado, let me jump straight in…

So, I was just lazing around on YouTube, looking for stuff to sedate my synapses with, and I managed to land on this video — the Jobs and Gates interview from the ‘oh-so-far-away’ year of 2007.

The most striking thing about the interview, aside from the historic convergence of two of the most brilliant minds in the tech industry (veterans, if you will), was their predictions for the next decade. And, since I was sitting in the year 2017, I had the absolute luxury of critiquing their predictions in hindsight.

One thing stood out for me from the entire interview: the part where they talk about A.I. (Bill, especially, goes deeper into the topic than Steve). Although the discussion was still mostly focused on the form factor of future devices, it was fascinating to hear these visionaries’ foresight into A.I. and the industry in general.

So that got me thinking…

I mean, we keep hearing about A.I. all the fucking time in the news. Everyone’s talking about it. I’ve heard people talking about it in libraries, offices, public transportation and even in fucking washrooms (so, what if an A.I. could automatically flush down my poop once it detects that my bowels are empty). There’s an A.I. article published by nearly every major newspaper at least once a month, conveniently timed with some random-ass statement from the ‘ultra-rad’ Elon Musk on the state of A.I. in general.

But, how much do we really know about A.I.?!

I suppose not much. Most of our KNAWLEDGE of A.I. is generally sourced from NEWS ARTICLES (even for me, a CS undergrad, which is a shame). And news articles are mostly written by journalism and liberal-arts majors who aren’t exactly great with things involving technology, and I bet half of them don’t even understand it at the depth necessary to appreciate it. And I’m not the only one saying it.

So yeah, in order to understand the state of the A.I. industry and academia, I hit up the A.I. research pages of major tech firms, and I WAS BLOWN AWAY BY THE RAPID ADVANCEMENT. For instance:

  • Facebook’s pouring MASSIVE amounts of $s into its A.I. research program, christened ‘Facebook A.I. Research (FAIR)’, which in all fairness (pun intended) sounds like a pretty good name. Their mission statement talks about using A.I. to make the world ‘more connected’. Check out this piece talking about their research on advancing their A.I.’s vision game.
  • Google’s making insane advances in the field of A.I. (which is not surprising). I mean, their depth of research could very well span a couple dozen pages. They are researching applications of A.I. to everything from self-driving cars to personalized assistants (aka yo’ Google Assistant on 7.0.x and above) to augmented reality. And in fact, you can INTERACT with one of their A.I.s right from the device you’re reading this on, RIGHT NOW. Check out ‘Quick, Draw!’, a game where a supervised A.I. predicts what you’re trying to draw based on previous data sets. It’s cool.
  • Tesla’s been gathering a SHIT-TONNE of data from its massive fleet of Models S, 3 and X (S3X for short), and that in turn has made its ‘Autopilot’ A.I. incredibly better. It’s now almost as good as a human driver, and as time goes on, its rate of crashes is only going to keep decreasing. I mean, check out this video and see for yourself.
  • SpaceX has been making steady progress on landing their first-stage booster on the droneship over the past couple of years now, ever since their first propulsive landing on LZ-1 on that historic day in December 2015 [I watched it live :’)]. And if you want to see the progress for yourself, I think a good place to start would be THIS video, which is essentially a montage of all the ‘failed’ landing attempts that culminated in the success of the Dec 2015 mission (Orbcomm OG-2). That’s probably the best and most observable example of iterative, data-driven engineering at work — though, to be pedantic, the landing guidance itself leans more on control theory and convex optimization than on machine learning proper.

And much, much more… I haven’t even mentioned Microsoft’s expansive A.I. division yet, or Amazon’s. You get my point. This is one of the most exciting fields out there, no question; companies know it. Investors know it.

Anyway, it’s almost 2:00am here in New Delhi and it looks like I might have to end this abruptly, but my point is that A.I. has become so intertwined with our lives that we’re starting to take it for granted.

I think we should sometimes take a step back and try reading up on the latest academic papers/publications in the field. Most of them will blow your mind for sure.

And of course, the media and pop culture in general aren’t exactly the best guides for understanding A.I. There’s so much happening, and most of it behind closed doors, that we, the average Joes, will never get to see it unless we seek it out.

Until next time.

P.S. Working on a super long article; it will most probably go live in a couple of weeks-ish! <3