Episode 7: Education Cleavage

Subscribe in a reader

Transcript of Audio:

Welcome back to Three Deviations Out!  Last week we talked about the distributed internet and how to cut the ultimate cord.  This week we’ll cover the very real and distinct split in our education system.  I’ll warn you that this week was pretty hectic, so the episode is short.  Don’t worry, though; education is a massive topic I will be revisiting at a later date.

Now, before I get all riled up, let’s cover the basics:

What: The failure and subsequent splintering of education ideologies across the country as we all try to figure out how to break away from our antiquated system that will ruin us in only a few years.

Who: United States Department of Education, Common Core Advocates, Charter School Advocates, Teachers’ Unions, Private Schools, Non-Profit Education Initiatives, the Montessori Schools, F-infotech, Eduventures, Codecademy, Duolingo, Lumosity, IBM, and P-TECH

Why:  In just a few years there will come a cleavage point in our society where the majority will not be able to get jobs with the skills they learned in high school or college, and those already in the workforce will have their jobs automated.  The choice will be to either upskill or give up and fall back on the state.  Unfortunately, at the scale expected, our country will not be able to support the number of individuals whose skills will not fit into the new economy.  When that happens, we need to have a fallback plan to ensure we can keep going, keep innovating.

83.3% of full-time, first-time postsecondary students received some sort of federal financial aid in the 2014 – 2015 school year, up from 75% ten years earlier.  Overall, 38.3% of undergraduate students received financial aid in 2014 – 2015, with an average loan of $6,831 annually resulting in $27,324 of total debt over the expected 4 years to graduation.  In the 2009 graduating year, though, only 53.8% of students nationally finished their 4-year program within 6 years of the start date.  As for high school, an average of 3.6% of students drop out nationally each year, with the highest rate (4.8%) coming in senior year.  In 2015 the United States fell right at the average of OECD countries in reading and science, and below the average in mathematics.  As a country we are also investing less over time in primary and tertiary education while other OECD countries spend more, with steady declines in investment rates since 2009 even though the economy has only grown since then.  Our country’s teachers are overworked and underpaid compared to their peers abroad and have minimal time to plan lessons and give feedback to students because they are consistently at the front of the classroom, teaching.  That’d be like expecting me to have a six-hour blog post ready to go each day, all prepped and researched by myself.  I could maybe do it for a week.  If that.
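For anyone who wants to sanity-check the loan math above, here is the back-of-the-envelope arithmetic using the same figures quoted in this paragraph:

```python
# Average annual federal loan for undergraduates, 2014-2015, as quoted above.
avg_annual_loan = 6_831
years_to_graduate = 4            # the "expected" graduation timeline

total_debt = avg_annual_loan * years_to_graduate
print(f"${total_debt:,}")        # $27,324 -- the total-debt figure in the text
```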

So, what we get out of that ramble is what many of us already know: our country’s education system is broken.  It’s been broken for a number of years.  However, with the technological changes making their way to the mainstream right now and a major socioeconomic shift in how we perceive what a ‘job’ is just over the horizon, something’s going to break.  Estimates suggest that 38% of Americans could lose their jobs to automation over the next 15 years, with the biggest hits suffered in routine tasks such as manufacturing, administration, transportation, and logistics.  Using August’s job numbers, that is the equivalent of nearly 60,000 people across the country losing their jobs with no chance of receiving another position in their field.  That’s a population the size of Flagstaff, Arizona.
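As a rough sketch of where that 60,000 figure could come from, here is the arithmetic, assuming “August’s job numbers” refers to roughly 156,000 jobs added that month (my assumption; the episode doesn’t state the base figure):

```python
# Hypothetical base: ~156,000 jobs added in August (an assumption on my
# part, not a figure given in the episode).
jobs_added_august = 156_000
automation_share = 0.38          # share of jobs estimated at risk of automation

at_risk = jobs_added_august * automation_share
print(round(at_risk))            # roughly 59,000 people -- "nearly 60,000"
```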

Today’s theory: A technological revolution will upend the socioeconomic system we’ve been contentedly dealing with for the past century or so, and we are not prepared for that.

What we’ll cover today:

  • Informal Education, including Public/Private Partnerships and Technological Reform
  • Use Cases
  • Things You Should Absolutely Know
  • The Idealized World in Amanda’s Head

Informal Education:

Education is increasingly becoming an intrinsically motivated pursuit.  From Massive Open Online Courses (MOOCs) to online universities, high-skill apprenticeships, technology incubators, and education apps, online or informal education is a growing trend.  With easy access, and with cost and other barriers to entry removed, many are coming to understand that a traditional formal college education does not necessarily provide the skills a person needs to succeed in the workplace or in society after graduation.

Use Cases:

As a mature and widespread market, education has a wide variety of use cases I could choose from to talk about here.  Because of that, I’ve chosen just one to focus on for each of the categories I covered just a minute ago.  The real-world examples I will be focusing on are: BASIS Scottsdale, Onondaga Community College, P-TECH and IBM, and Coursera.

BASIS Scottsdale:

The number one charter school in the country started in 2003 with 138 students.  The goal of the school is to merge STEM ideals with a liberal arts education, idealizing the merging of the fuzzy and the techie.  While focusing on traditional STEM and liberal arts subjects, it places a high emphasis on time management and organization.  For 5 years the organization has been listed in the Washington Post’s ‘Top Performing Schools with Elite Students’.  The option to go charter is a difficult one that every family has to weigh at an individual level.  A student may have more access to resources and be more challenged than in a traditional public-school setting, but in turn may also have a more focused academic path chosen for them instead of being able to choose it themselves.  This school has proven its clout, though, and it and the other BASIS charter schools are making this style of education very enticing.

Community College:

Community colleges are on the rise.  42% of undergraduates were enrolled in a community college in fall 2014.  Compared to the lofty price of a public or private four-year college, community college tuition ranges from just under $1,500 (California) to just over $7,500 (Vermont).  And depending on which state you live in, a community college will give you the same quality of education and access to resources as a full-time university while offering more flexibility and less expense.  Many community colleges have partnerships with full-time universities to ensure credits transfer if you decide to move from the two-year program to the four-year program, and business partnerships give these institutions access to internships and apprenticeships that may not be open to those working toward a four-year degree.


P-TECH and IBM:

There are 56 P-TECH schools in the country.  P-TECH stands for Pathways in Technology Early College High School, and these schools span grades 9 to 14 instead of the traditional 9 – 12 model.  IBM has developed these schools to give students the opportunity to earn a no-cost associate’s degree in a subject area that has been deemed necessary by business.  The focus is on STEM but also incorporates other necessary workplace skills like communication and business analysis.  Students are focused on a path toward a career from day one of 9th grade and can easily see the path to graduation with an accredited degree, removing the stressors of college applications and tuition.  This public/private partnership allows for a different path than the traditional high school, and is offering an interesting case study on what tailoring education to business needs actually accomplishes.


Coursera:

Coursera is a great meeting of the altruistic and the capitalistic minded.  The site does a great service; it provides high quality education through lectures, tutorials, class discussions, and interactive exams, all available online.  Most of these courses are free, or some aspect of them is free.  But in instances where someone wants a bit more recognition for what they’ve done, there is the ability to ‘purchase’ a class and receive a certificate at the end indicating they’ve obtained that skill.  This allows for a more traditional university feel, with a type of degree received at the end, at a much lower price and with more flexible hours than traditionally available.  Coursera has offerings spanning from deep learning to linguistics to art history and software development.  There are tutors available, study groups within each course, and access to top-tier academic scholars.  Right now Coursera partners with 145 post-secondary institutions and has 25 million active users, over 2,000 courses, and four full-length degrees.

Things You Should Absolutely Know:

It’s hard to guess what skills we humans will need in the next decade or so.  With technology moving as quickly as it is, we can make a lot of assumptions, but really it’s all up in the air until it actually happens.  Still, a few things are always key when interacting with one another and moving our society forward.

First of all, the ability to simply interact is key.  Communicating effectively, and understanding others when they’re trying to communicate with you, is a skill that will never go away, and it will only become more important as many of us work in more collaborative settings than we would have 10 years ago.

Another key skill will be the ability to access and assess our emotions in a positive way.  This goes along with communication but also ties in creativity and out-of-the-box thinking.  By access and assess our emotions I don’t mean get all touchy-feely all the time.  What I mean is the ability to be in a situation, check yourself, and make sure your reaction is appropriate to what is going on around you.  For example, if you are giving an important and widely watched speech, don’t allow yourself to get worked up into an angry fit and call people names.  Again, as we get more collaborative in our workspaces and communities, situational awareness and empathy are key skills.

Lastly, everyone should know how to use a computer.  I’m not saying you need to be able to code in C++ or hack into classified documents.  Just know the basics, and be willing to learn more as you do more in your online life.  Maybe take a Coursera course on web development or HTML, or do some reading on the history of the semiconductor.  Tech is pervasive now, there’s no escaping it.  So know something about it.

The Idealized World:

There’s a realization we have to face as a society, and we have to do it quickly.  That is, we don’t know what skills will be needed in the job market in 10 years.  The majority of us are just making wild guesses while a few very impressive humans are making more educated predictions.  Especially with robotics and AI automation potentially able to complete many of the tasks that make up entire current industries more successfully than we can, we need to approach education differently.  Instead of the outcome of education being a specific job, we should consider the types of traits and skills we want to see in our overall society, both professionally and socially.  We need to consider what it means to be uniquely human, and what kind of humans we want to be.  And this especially means that education, through formal or informal channels, does not stop at adolescence.  Learning new skills and being introduced to new ideas as a lifelong endeavor needs to become the norm, not the exception.  As we move to a less ‘job focused’ socioeconomic system we need to change our habits and preconceptions or risk falling into the proverbial pit.

Thanks for joining me for another episode of Three Deviations Out.  I hope you enjoyed it.  Leave comments, concerns, questions and arguments below, and follow me on Twitter @greaterthanxbar.  Next week we have a special treat, Jen Hamel will be joining us as Three Deviations Out’s first guest!  Follow her on Twitter @jenhameltbr and join us next week to talk about Artificial Intelligence.  The Robots are Here!



Episode 5: Data Storage Revolution


Orders of Magnitude

Transcript of Audio:

Hello and welcome to another episode of Three Deviations Out. My name is Amanda and I think for a living.  Last week we talked about millennials and the technology that shapes our lives.  This week we’re breaking down the details of the data storage revolution.

Data is not an outlier.  By now we all know that.  Data is an integral part of our lives, a looming constant that determines our decisions and grows as we create outcomes and outputs.  The outlier today is not data, but what we humans store all that data on.

What: The data storage revolution is the current and previously uncharted territory of data storage innovation, driven by the need to create more and more data centers to store that data we humans keep creating.

Who: IBM, Microsoft, SanDisk, Hewlett Packard Enterprise, Fritz Pfleumer, the University of Electro-Communications, Intel, the University of Manchester, Arizona State University, the University of Washington, Stanford University, and MIT

Why: Generally, we humans tend to be packrats.  In the digital age that hasn’t changed and may be even more pronounced.  Photos of vacations you took years ago that you haven’t looked at, well, in years are taking up space in that cloud drive or directly on your device.  That space isn’t arbitrary; whether you store on your device or in the cloud, there needs to be hardware backing it up.  Acres upon acres of land are owned by the US government, Amazon, Microsoft, IBM, Google, and other large data center providers and users.  The opportunity cost is potential farmland in a world where rural hunger and food deserts are a real thing.  Or potential housing when home and rental prices are skyrocketing.  Or potential natural space in a time when some kids think a baby carrot is actually what a carrot looks like.  Innovation in data storage has massive implications for the physical space we humans take up, on a large and influential scale.  Also, if there isn’t innovation in the space, it means we will have to start changing our nature as data piles up and we can’t build any more data centers or virtualize any more machines, forcing us to purge the unnecessary information we tend to cling to.  Cloud prices, right now next to nothing, would continue to increase until only the affluent and the corporate can afford additional space.  And tell me, how will I upload another video of my dog chasing her tail then?

That’s the thesis today, folks: data storage is an outlier because it has the potential to make or break the influence that wonderful emerging tech will have, because without someplace to store it all there will be no revolution.

There will be no use case section today because there is one way to use data storage: to store data.  If I’m wrong please feel free to educate me in the comments.  Instead today’s provided program will look as such:

  • The History
  • The Trigger
  • Implications
  • New technology
  • Ideal world in Amanda’s head


The History:

In the 1970s, when microchips began the trend of getting a makeover every 18 months, storage methods were largely left alone.  The common mindset was that there would never be a need to store so much data that it warranted significant innovation.  Enter the Internet, essentially ubiquitous access to cloud storage, and the desire of every organization and private citizen for big data analytics.  Not only is the sheer amount of data larger than it has ever been (90% of the world’s data in 2016 had been created in the previous 2 years) and expected to reach 44 trillion gigabytes by 2020, there is also increased demand for edge storage.  Edge storage often lives on small IoT devices that track automated processes, and demand for analytics and storage directly on these devices will only continue to increase as blockchain across devices becomes more prevalent.  So today we focus on the data storage revolution and how hardware can keep up with the Yotta- prefix.

Storage is based on magnetization: if a particle is printed on the storage device in one direction it is read as a 1, and if it is printed in the other direction it is read as a 0.  This way of writing information has stayed essentially the same for the entirety of data storage history; it has just gotten smaller along the way.  The first magnetic tape was patented in 1928 by Fritz Pfleumer.  This style of storage wasn’t actually used for data, though, until 1951 in the Mauchly-Eckert UNIVAC I.  Key to this type of storage is that it can be overwritten, allowing old information to be purged and new information added.  Especially for consumer devices, which often don’t hold data as sensitive as that of public or private organizations, the ability to rewrite allows for both cost and space savings.

This first magnetic tape was able to hold 128 bits of data per square inch, and the first recording of data was 1,200 feet long.  A recent breakthrough by IBM has brought the density of magnetic tape to 201 gigabits per square inch.  That means the data on one square inch of this new tape would have required 1.57 billion square inches of Pfleumer’s tape, or 24,784 miles of tape and over 83,000 books of storage.  Now all of that is able to fit in a space shorter than your pinky finger and thinner than the width of your phone.
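The density comparison works out like this, a quick check of the numbers quoted above:

```python
old_density_bits = 128            # bits per square inch, Pfleumer's 1928 tape
new_density_bits = 201e9          # 201 gigabits per square inch, IBM's new tape

# How many old square inches one new square inch replaces.
ratio = new_density_bits / old_density_bits
inches_per_mile = 63_360
print(f"{ratio:.3g} square inches, or {ratio / inches_per_mile:,.0f} miles")
# roughly 1.57 billion square inches, about 24,784 miles, matching the text
```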

The Trigger: 

That may seem like a lot of storage, but think of the amount of data generated daily.  Every day we humans create 2.5 quintillion bytes of data, a number which when written out has 17 zeros and converts to 2,500 petabytes.  Petabyte may be a new one for you, but it is one prefix step up from terabyte (1,000 terabytes).  The embedded image at the top of the page is a really handy reference chart if you would like to see all 48 current orders of magnitude.  Data rules over our lives these days.  What is captured by our daily activities has repercussions on the ads we see, the loans we qualify for, and even what careers we’re considered for.  Data is king, and data storage is both the army of and the history written about that king.  This has proven to be a complex relationship as we continue to create more and more data.  Constantly adding servers doesn’t quite work: the more servers, the more management is required, and the more management required, the greater the likelihood of a crash or a bug derailing the entire system or, worse, exposing sensitive data held in that storage capacity.  Also, just adding servers creates more convoluted connections between all that hardware as the machines try to communicate with one another, processing or querying data stored across an entire server farm.
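To make the prefix ladder concrete, here is the daily-creation figure walked down the scale, using the same numbers as the paragraph above:

```python
daily_bytes = 2_500_000_000_000_000_000   # 2.5 quintillion bytes per day

petabyte = 10 ** 15                        # each prefix step is a factor of 1,000
print(daily_bytes // petabyte)             # 2500 petabytes per day, as stated
print(len(str(daily_bytes)) - 2)           # 17 zeros after the leading "25"
```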

There are a number of ways this issue is being combatted right now.  One is middle-out storage, developed by HPE and illustrated in the development of The Machine.  This allows all data being processed by given hardware in the network to be stored in a single location, decreasing space requirements and increasing the ability to query across an entire collection of data.  Another effort involves the molecular storage of data at cold temperatures, a technique still relying on magnetism but exponentially shrinking the space needed to house the same amount of data.  As the temperature at which molecules can store data edges toward that of liquid nitrogen, a fairly inexpensive coolant, the scalability and commercialization of this method become highly viable.


Implications:

Plans for the world’s largest data center have been proposed by Kolos Group, a US-Norwegian partnership that also operates in the fishery & aquaculture, oil & gas, and power & industry markets.  The large facility is planned to meld with the landscape through efficient design, and much of the expected 1,000 megawatts of power is expected to come from renewable energy.  This is just the latest and largest in a wide range of data center types, sizes, and locations.  As we continue to create data at greater and greater scales without purging what we created yesterday or the day before, our need for storage is going to continue to increase even as storage forms become denser.  That means more space being taken up by storage facilities and more power being used to run those facilities, causing a not insignificant impact on the environment.  In an age when housing prices are a struggle for even the well employed, continuing to dedicate larger amounts of land to storage only exacerbates the problem.  Data storage facilities aren’t all bad, though.  High-skilled workers are required for each of these centers in areas spanning from admin and management to high tech data systems capabilities to cyber and physical security.  Not only that, but often these fields are underpopulated and so offer wide opportunity for those just starting their careers or those looking to shift careers.  With greater environmental efficiency and planning to optimize space, along with continued advancements in the tech, we humans have the potential to live comfortably with our desire to hoard everything, even data.

New Tech:

In lieu of use cases today, we’re going to look at some of the new technology that is being researched and tested in the data storage sector.

Molecular Storage:

Blockchain and the desire for edge computing, in addition to the massive amounts of data being created daily, are spurring the need for smaller and cheaper data storage options.  E.g., if a product is being tracked through the supply chain with blockchain-enabled RFIDs, the device needs to be small and cheap enough to span hundreds of thousands of items of varying sizes while also being able to hold data for all the blocks in the chain associated with it.  In comes our first new tech, molecular storage.  While still in the research phase, molecular storage would enable high density information to be written on a single molecule, 100 times more densely than current technology.  The downfall of molecular storage is the need to keep these molecules cold. Very cold.  Recently a University of Manchester team working on this technology made a breakthrough in raising the required temperature from -256 C to -213 C, an increase of 43 degrees C.  However, -213 C is still tremendously cold (the equivalent of -351.4 F), and cold enough that there is not currently an effective and inexpensive cooling technology to support it.  Continued research hopes to bring molecular storage up to a working temperature of around -196 C, the temperature of liquid nitrogen.  Liquid nitrogen is relatively cheap when speaking in terms of high tech cooling systems, and reaching it would be a breakthrough in the commercialization of the technology.
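The temperature figures convert like this; the formula is just the standard Celsius-to-Fahrenheit conversion:

```python
def c_to_f(celsius):
    """Standard Celsius-to-Fahrenheit conversion."""
    return celsius * 9 / 5 + 32

print(c_to_f(-213))          # roughly -351.4 F, the figure quoted above
print(-213 - (-256))         # 43: degrees C warmer the Manchester breakthrough
                             # made the operating requirement
```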

DNA Storage:

As weird as it sounds, let’s start printing things onto biology.  I mean, we’re already printing biology, with some 3D printers able to print organs live for transplant.  So why not print directly onto the fabric of what makes us us?  That is what researchers, including teams at the University of Washington and Northwestern University’s Center for Synthetic Biology, plan on doing.  Like molecular storage, DNA storage is 3D and therefore denser.  Unlike molecular storage, DNA storage is further along in the GTM process.  While still very much in the research and development phase, there have been some highly publicized and very interesting applications of the technology.  E.g., one team was able to print a movie onto DNA storage in April of this year.  The movie, ‘Ride On, Annie!’, of a horse running, was encoded into E. coli DNA; this specific application was encoded into an actual living organism, the E. coli cell itself.  Both DNA storage and the announcement by Arizona State University researchers of an RNA-constructed biological computer have significant implications for the furthering of the technological and biological crossroads.

3D storage/processor combo:

Bottleneck is a real thing.  Between two chips, like a storage chip and a processing chip, bottleneck creates data latency, which at scale becomes a serious problem.  That sounds like gibberish, you say?  Try watching your Excel model struggle to process a sheet full of SUMIFS formulas on 700,000 rows of data.  Do you know Word?  I Excel at it.  I’m going off on tangents.  Anyways, a joint effort by Stanford and MIT has produced a solution: a 3D chip that is both a processor and a storage mechanism.  The device uses nanotechnology, with carbon nanotubes instead of a silicon-based material, and the teams have developed the most complex nanoelectronic system to date.  Layers of logic and storage are woven together to create a web of detailed and complicated connections.

Advances in tape storage:

Lastly, I want to again touch on tape storage.  As I mentioned earlier, IBM has recently announced a revolutionary tape that holds 330 terabytes in a length about as long as my pinky (pinky size may vary).  Tape is the oldest form of data storage and has continued to evolve from the outset.  Expect to see more from this legacy tech in the future; it’s not going anywhere.

Idealized World: 

We humans create a lot of data.  Especially as the generations who grew up on screens start to overtake those who didn’t, data consumption and creation will continue to grow.  It all has to go somewhere, and someone has to pay for it.  I see a variety of the storage models I spoke about today, along with a number of other emerging and legacy technologies, coming together to optimize our space requirements.  I also imagine that we as a society will start learning how to pare down the data we keep, understanding more accurately what will bring use and what will sit in the back of the closet collecting dust.  What I know for sure is that if we continue this upward spiral into the data dimension we will get lost in cyberspace and not realize the space around us has become exponentially more cluttered with hardware.

Thanks for listening today, guys; this is everything I currently have to say about data storage.  As always, comment below with any requests, recommendations, corrections, updates, and overall bashing.  Next week we dive into the murky waters of the distributed internet, so buckle up.  Til then, go do something greater than average.



Episode 3: Quantum Computing


Transcript of Audio:

Hello and welcome to episode 3 of Three Deviations Out.  If you haven’t been following along, I’m Amanda and I love outliers.  I have a passionate belief that outliers of any kind are the sparks of this world, good or bad.  Like a bit of sriracha on literally almost anything, they make life a little spicier.  Last week we talked about blockchain and how it is so much more than a digital currency for drug traffickers and people with weird fetishes.  This week we’re going to talk about quantum computing: the tech, the players, the impact.  First, let’s start with the key points:

What: Quantum computing is the idea that by using quantum bits (qubits) instead of regular binary bits, a user can take advantage of the unique attributes of electrons when they become entangled and are in superposition.  It’s alright if you didn’t understand some or a number of the words in that last sentence; I didn’t either when I first started looking into this technology.  We’ll be going over the various quantum properties here in a minute.

Who: IBM, D-Wave, Google, Accenture, Atos, Rigetti Computing, NASA, the People’s Republic of China, Russia, The University of New South Wales, and every researcher who has committed their life to studies furthering quantum theory, including Albert Einstein, who first noted quantum properties of small particles, and Erwin Schrödinger, well known for his famous cat.

Why: Quantum computers have the potential to be highly influential in operational efficiency, chemical calculation, and machine learning.  To put that in perspective, those three areas cover nearly all aspects of computing outside of productivity applications and web browsing.  Not to say that quantum computing won’t influence the Internet.  You just won’t use a quantum processor browsing for kitchen supplies on Amazon.  Everything from climate change and world hunger to cancer research and the eradication of disease could be addressed through the power of quantum processors.  That is why I see quantum computing as an outlier.  This is a technology that could exponentially advance our society in ways that only a handful of major technological breakthroughs in recorded human history have.

Okay.  So, now that everyone knows exactly what quantum computing is and how to use it and how to code while taking superposition into account, let’s get into the complicated stuff.  Raise your hand if you have any questions.  None?  Great, let’s move on.  I will warn you that there will be fewer use cases and less hard evidence in support than I was able to present with blockchain, simply because on the product development cycle quantum computing is an infant while blockchain is more of an adolescent.  Keeping that in mind, I want to hit a few points:

  • How it works
  • The differences in hardware/GTM approaches
  • Potential implications
  • Use cases: Things we’ve seen so far
  • The weird stuff, because yes, it does get weirder
  • Idealized world in Amanda’s head

The first thing I’ll note is that in the great scheme of things, quantum processors are not an outlier.  Like, if we look at all of human history, this is what we tend to do.  We develop tools that make our lives easier or enable us to do things faster.  The impact any of those tools has comes from its ability to scale and the number of people it touches.  Last week I compared blockchain to the widespread adoption of the printing press.  Quantum is more akin to the widespread adoption of agriculture.  Yeah sure, foraging for berries and hunting antelope with spears got the job done, but isn’t it just so much easier to sit back feeding this cow hay until it’s big enough to eat?  Calculations that would have taken hundreds of years on a classical (binary) computer, even one that would be considered a supercomputer, would take days, hours, or even minutes with a quantum computer that has reached supremacy.  Supremacy is the ability of a quantum computer to outperform the highest functioning classical computer, thought to be reached at around 50 qubits.  Google has pledged to have a 50-qubit processor by the end of 2017, and a research group led by Harvard professor Mikhail Lukin announced in June that they were the first to build a 51-qubit universal quantum processor.
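One way to see why 50 qubits is the rough supremacy threshold: simulating an n-qubit register classically means tracking 2^n complex amplitudes, and the memory cost explodes.  A small sketch of that scaling argument (my own illustration, not any specific benchmark):

```python
# Memory needed just to hold an n-qubit statevector classically,
# at 16 bytes per complex amplitude (two 64-bit floats).
for n in (10, 30, 50):
    amplitudes = 2 ** n
    gib = amplitudes * 16 / 2 ** 30   # bytes -> GiB
    print(f"{n} qubits: {amplitudes:,} amplitudes, {gib:,.0f} GiB")
```

At 50 qubits the statevector alone runs to roughly 16 million GiB, which is why classical simulation falls over right around that point.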

How it Works:

Quantum entanglement is a property that occurs when the attributes of one particle give information about the attributes of another particle.  This is important because the nature of quantum particles means that observing them implicitly changes them.  Using this theory, we can say that a particle both is and is not any given thing at any given time.  This idea, that a particle can exist in multiple states at once, is superposition.  So, the exchange of information about particle attributes is entanglement, and the idea that these particles exist in multiple states at once is superposition.  Both of these conditions need to be achievable for a quantum computer to function, along with a number of other variables.  When they are reached in a quantum processor, the hardware is able to do exponentially more calculations at once than a traditional computer with the equivalent number of bits, thanks to the two properties we just discussed.  Consider it this way: a traditional bit thinks in binary, which is either one or zero.  A qubit can consider one or zero, or one and zero, all at the same time.  These properties are influential across a number of applications including cryptography, telecommunications, teleportation of photons, and the decentralized internet, but today we will mostly stick to processors.
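The ‘one and zero at the same time’ idea can be sketched in a few lines of plain Python.  This is my own toy statevector illustration, not any vendor’s API: a qubit is a pair of complex amplitudes, and the Hadamard gate puts the |0⟩ state into an equal superposition.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate to a single-qubit state (a, b)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1 + 0j, 0 + 0j)                 # |0>: a measurement always reads 0
psi = hadamard(zero)                    # equal superposition of |0> and |1>

# Born rule: |amplitude|^2 gives the probability of each measurement outcome.
probs = [abs(amp) ** 2 for amp in psi]
print(probs)                            # ~[0.5, 0.5]: one AND zero until measured
```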

Differences in hardware/GTM approaches:

There are a number of quantum computing companies that span services, hardware, and software.  Today I will be featuring IBM, D-Wave, and Rigetti Computing because these are the companies I am familiar with.  If you are interested in other quantum computing companies and you think I should know about them, please feel free to leave a comment.  If you are part of an effort working on some aspect of quantum computing and want to talk shop, I’d love to.  Because of the depth and overall newness of this topic I expect I will cover it more than once and would love to have guests next time.  IBM, D-Wave, and Rigetti are all in the quantum processor business, but each approaches hardware and GTM in a different way.

D-Wave currently has a 2,000-qubit version of its annealing processor available for the low low price of $15M, with an open source quantum language, Qbsolv, available on Github.  The company currently only offers an on-premises hardware option, with cloud access available in certain instances but not for public use.  An annealing computer is a type of quantum processor that solves only a narrow class of problems, mostly optimization.  Also, instead of fully harnessing the power of quantum mechanics, it is more just along for the ride.  Essentially, it is the difference between using a broken-in horse to plow a field and trying to hook a wild mustang up to the plow and make that work.
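To make the annealing idea concrete, here’s the classical cousin of what a D-Wave machine does: simulated annealing.  Both walk an energy landscape toward a low-cost configuration; the quantum version can tunnel between valleys instead of climbing over them.  The toy Ising-style problem below (eight spins that want to agree with their neighbors) is my own made-up example, not D-Wave’s code:

```python
import math
import random

# Energy of a 1-D Ising chain: lower when neighboring spins agree.
def energy(spins):
    return -sum(spins[i] * spins[i + 1] for i in range(len(spins) - 1))

def anneal(spins, steps=5000):
    """Classical simulated annealing: flip spins, cool slowly."""
    temp = 2.0
    for _ in range(steps):
        i = random.randrange(len(spins))
        before = energy(spins)
        spins[i] *= -1                      # propose flipping one spin
        after = energy(spins)
        # Keep flips that lower energy; keep worse ones with a
        # probability that shrinks as the "temperature" cools.
        if after > before and random.random() > math.exp((before - after) / temp):
            spins[i] *= -1                  # reject: flip it back
        temp *= 0.999
    return spins

random.seed(0)
result = anneal([random.choice([-1, 1]) for _ in range(8)])
print(result, energy(result))  # typically ends fully aligned (energy -7)
```

An annealing processor is doing essentially this search in hardware, which is why it’s great at optimization problems and not much else.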

IBM is working with a slightly different GTM and hardware approach.  Big Blue currently has two versions of its universal quantum computer: a 16-qubit machine available for public use and a 17-qubit machine for commercial use only.  Both are available over the cloud, and don’t worry, you don’t have to learn to code in quantum just yet.  The company has built an API that lets you develop in Python directly on the quantum platform, and there is a growing community on Github using the curated tool kit.  However, if you did happen to be interested in learning a quantum language, IBM may not be the place for you, as its root language code has not yet been open sourced.  As far as I am aware, IBM has not commercialized a hardware package, making its quantum processor available only over the cloud.

Finally, Rigetti Computing is like the cooler younger sibling of the quantum computing heavy hitters.  The company started only a few years ago, going through Y Combinator, and now resides in a warehouse in Berkeley, CA.  Offering quantum computing through the cloud, Rigetti has both a Python-based API and an open source quantum language called Quil for its beta program, Forest 1.0.  Users are able to build and simulate algorithms on 30 qubits, along with running them on an actual quantum chip.  The company has developed 8-qubit chips as of August 2017 and is using a new two-qubit gate scheme that makes the chip more scalable than previous iterations.  Rigetti is working with universal quantum chips, the most powerful type of quantum processor.

Potential implications:

I want to preface this section of the recording by saying that quantum computing is still in its infancy, especially when compared to technology like microchip processors.  So, while potential implications are well researched, they are well researched only to the extent that a technology which until the last few years existed only in peer-reviewed papers can be well researched.  With that asterisk, I will say that the potential of quantum computing is large.  What we’ve seen so far is that the technology is able to calculate operational and chemical algorithms more effectively than classical computers.  Operational calculations span everything from traffic optimization to supply chains to law-enforcement enablement algorithms.  Chemical calculations include determining the best ways to mitigate climate change, how best to grow and distribute crops to fight global hunger, and genome sequencing that could influence the prevention of any number of diseases.  This is because, according to Andrea Morello, “the number of operations it takes to get to a result is exponentially smaller”.  That means that any algorithm with a large number of variables, any you can think of, can be made faster and more effective with quantum computers.  The concept becomes difficult because there is little testing and real-world application of the theory, but as you will discover in Amanda’s idealized world, I think the potential is high.

Something quantum computers will be great for, and have already shown early ability to do, is break encryption.  The basis of most public-key encryption is factoring, using numbers large enough that classical computers could never brute force their way in.  Quantum computers, however, take a very different calculation approach thanks to superposition.  Much of today’s encryption would be no problem for a sufficiently large quantum computer.  In fact, quantum processors have already run Shor’s algorithm, a quantum factoring algorithm that is exponentially faster than the best known classical approaches, on small numbers.  What this means is that current encryption becomes useless once large enough quantum processors are available.  And because there are quantum cloud offerings, the barriers to entry for encryption-breaking techniques do not include the steep price of hardware.  There are companies such as Post-Quantum and ISARA Corporation that are attempting to safeguard against potential quantum attacks.  So far there has yet to be a hack specifically attributed to quantum decryption, but I only have public knowledge available to me.  If you know differently I would love to hear about it; please reach out through my contact page, the comments, or the number of social accounts I have.  For now, the best approach I can recommend is to try your darnedest to go post-quantum as quickly as possible if you consider yourself a high-level target.  Otherwise, I don’t see wide-scale quantum hacking in the ilk of WannaCry or other massive malware happening soon.  For all our sakes, let’s hope I’m right.
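For the curious, here is where the quantum speed-up actually lives in Shor’s algorithm.  The only quantum part is finding the period r of f(x) = a^x mod N; the rest is classical number theory.  In this sketch of mine, the period search is done by slow brute force, standing in for the quantum subroutine:

```python
from math import gcd

# Classical skeleton of Shor's algorithm.  The quantum hardware's only
# job is the period-finding step, done here by brute force instead.
def find_period(a, n):
    """Smallest r > 0 with a**r % n == 1 (the quantum subroutine's job)."""
    r, value = 1, a % n
    while value != 1:
        value = (value * a) % n
        r += 1
    return r

def shor_classical_sketch(n, a):
    if gcd(a, n) != 1:
        return gcd(a, n)        # lucky: a already shares a factor with n
    r = find_period(a, n)
    if r % 2:
        return None             # odd period: retry with another a
    x = pow(a, r // 2, n)
    if x == n - 1:
        return None             # trivial square root: retry with another a
    return gcd(x - 1, n)        # a nontrivial factor of n

print(shor_classical_sketch(15, 7))  # -> 3 (the period of 7 mod 15 is 4)
```

On a classical machine that `find_period` loop is exponentially expensive as N grows, which is exactly what keeps RSA safe today and exactly the step a quantum processor collapses.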

Another area where quantum computing has potential impact is the field of Artificial Intelligence.  Some humans may find this terrifying, but personally I find it enthralling.  I’ll spend an entire episode on AI, but I’ll take a minute now to cover some of the things that have been accomplished already before getting into how quantum will influence the space.  Most people know Watson, Siri, and Alexa.  These are all artificial intelligence programs that, I don’t know about you, but the humans I know talk about as if they are people themselves.  Instead of calling Siri ‘it’, she is referenced as a female.  There is the sentiment about wanting to ‘meet’ Watson, as if you could actually shake his hand.  Beyond the programs whose names we know, there are the robo-dials that are oh so common now: artificial intelligence programs meant to speak over the phone, often as a cold-call salesperson or customer support.  AI does more than just talk to us, though.  There are new methods of sleep study that use AI to be more effective, along with medical treatment, handwriting and facial recognition, journalism, and creativity.  I know that sounds farfetched, but just check out the piano piece in the link below built by an AI.  AI uses deep learning, built on neural networks designed to ‘think’ like the human brain, to understand complex relationships.  Quantum has the potential to exponentially increase the amount of information an AI program can process at a time, because its very nature is processing and understanding variables.  Being able to run more variables, with more cross-relationships across the network, will only increase the efficiency of machine learning, giving us humans more effective and efficient machines that can make decisions better than we can in almost innumerable fields.  We will see the first great novel written by an AI program, and it won’t be uncommon to see an AI program on a corporate or philanthropic board.
Many knew the AI revolution was coming, and in some ways, it has already snuck into our lives.  Quantum computing is just speeding up that integration, and in my opinion, that is just fantastic.

The last area I will touch on is operational efficiency, which you already know from last week is a personal favorite of mine, mostly because I’m lazy and don’t want to exert any extra energy if I don’t have to.  Quantum computing has already shown a few uses in operational efficiency that reach beyond potential into the real world.  Operational algorithms are perfect for quantum computing because they start with a lot of variables and try to define every conceivable relationship among those variables to create the most optimized process.  Because a quantum computer can explore those relationships with qubits representing 1, 0, or both at the same time, the variable relationships can be determined much more quickly, allowing for some significant real-life applications.  One of the use cases we will be getting into revolves around this idea of operational efficiency in real time, allowing better use of a human’s resources to get a job done because there are fewer, or more effective, steps in the process than before.
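The way these “every relationship between every variable” problems get fed to an annealer is usually as a QUBO: binary variables, a weight on each variable, and a weight on each pairwise relationship.  Here’s a tiny brute-force version with made-up weights of my own, just to show the shape of the problem:

```python
from itertools import product

# A QUBO (quadratic unconstrained binary optimization) problem:
# binary variables x_i, a linear weight per variable, and a weight
# for every pairwise relationship.  Weights below are illustrative.
def qubo_energy(x, linear, quadratic):
    e = sum(linear[i] * x[i] for i in range(len(x)))
    e += sum(w * x[i] * x[j] for (i, j), w in quadratic.items())
    return e

linear = {0: -1, 1: -1, 2: -1}       # each variable "wants" to be on...
quadratic = {(0, 1): 2, (1, 2): 2}   # ...but adjacent pairs conflict

# Brute force must check all 2**n assignments; an annealer settles
# into the low-energy answer directly.
best = min(product([0, 1], repeat=3),
           key=lambda x: qubo_energy(x, linear, quadratic))
print(best, qubo_energy(best, linear, quadratic))  # -> (1, 0, 1) -2
```

With three variables the brute force is trivial, but every variable you add doubles the search space, which is why these problems are the natural fit for annealing hardware.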

Use cases:

So, on that note we will get into those use cases I talked about.  We’ll be covering two today:

  • Real time traffic route optimization
  • Unsupervised machine learning

D-Wave and Volkswagen teamed up to understand how quantum computing could influence traffic route optimization, a process with a high number of variables that change at a high rate.  Using a dataset of 418 taxis en route to the Beijing airport, the team built an algorithm of 1,254 logical variables to represent the problem and optimized by running a hybrid classical/quantum solution.  Before the optimization was run, a relatively small number of routes were being taken, with heavy congestion on nearly all of them.  Correcting for queue wait times, the researchers concluded that with a dedicated D-Wave quantum annealing processor the route optimization could be run in 22 seconds across 50 randomized routes to clear congestion for the 418 vehicles on their way to the airport.  The amount of time needed to complete the problem is expected to diminish as the number of qubits in a machine increases.  The group of researchers plans to continue their work in traffic optimization with quantum, as well as understanding other real-world applications of quantum processing technology, so keep your eyes peeled and I will try to keep you as up to date as possible.
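A scaled-way-down version of the routing idea: give each car a choice of a few candidate routes and pick the assignment that keeps any one route from clogging.  The cost function and numbers below are my own illustration, not the VW/D-Wave study’s formulation, and the brute-force search is exactly the part their hybrid solver hands to the annealer:

```python
from itertools import product

# Toy traffic assignment: congestion cost is the sum of squared route
# loads, so piling cars onto one route is penalized nonlinearly.
def congestion(assignment, n_routes):
    loads = [0] * n_routes
    for route in assignment:
        loads[route] += 1
    return sum(load ** 2 for load in loads)

cars, routes = 6, 3
# Brute force over all routes**cars assignments -- fine for 6 cars,
# hopeless for 418 taxis, which is where the annealer comes in.
best = min(product(range(routes), repeat=cars),
           key=lambda a: congestion(a, routes))
print(best, congestion(best, routes))  # balanced: 2 cars per route, cost 12
```

Six cars and three routes give 729 assignments to check; 418 taxis over 50 routes would give roughly 50^418, which is why nobody brute-forces the real thing.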

In the first quarter of 2017, four researchers at Los Alamos National Laboratory tested the influence of quantum annealing on machine learning.  The researchers tested their hypothesis, that matrix factorizations are more efficient on quantum chips than on classical chips, using D-Wave hardware.  They ran their experiment both on the D-Wave quantum annealer and on two different classical computers.  Each processor attempted to process 10, 100, and 1,000 faces across a number of tests to develop a facial recognition program.  After all tests were run, the team concluded that while the quantum processor was capable of very fast solutions, its ability to actually reach those solutions was sporadic: many runs were very fast, but there was large variability, with some processing tests taking up to 10 minutes.  The classical computers, on the other hand, were at times slower than the quantum processor but were more consistent in processing time across all tests.  The group concluded that there was no clear winner, because while a modification to the calculation had the potential to make the classical computers quicker, they also noted the relative immaturity of quantum technology and that, with work on both the algorithm used and the D-Wave hardware, speed and consistency could be improved.
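To give a feel for what “matrix factorization with binary entries” means, here is a miniature stand-in I wrote for this post (not the Los Alamos code): factor a 3x3 binary matrix V into the boolean product of a 3x2 matrix W and a 2x3 matrix H by brute force.  The annealer’s job in the real experiment was to search this kind of binary space far faster.

```python
from itertools import product

# Boolean matrix product: entry (i, j) is 1 if any k has W[i][k] and
# H[k][j] both set (OR of ANDs instead of sum of products).
def boolean_product(w, h):
    return [[int(any(w[i][k] and h[k][j] for k in range(2)))
             for j in range(3)] for i in range(3)]

def bits_to_matrix(bits, rows, cols):
    return [list(bits[r * cols:(r + 1) * cols]) for r in range(rows)]

V = [[1, 1, 0],
     [1, 1, 1],
     [0, 0, 1]]

# Brute force all 2**6 * 2**6 = 4096 candidate (W, H) pairs.
for wb in product([0, 1], repeat=6):
    W = bits_to_matrix(wb, 3, 2)
    for hb in product([0, 1], repeat=6):
        H = bits_to_matrix(hb, 2, 3)
        if boolean_product(W, H) == V:
            print("W =", W, "H =", H)
            break
    else:
        continue
    break
```

Compressing a face means V is an image and W and H are much smaller; the search space grows as 2 to the number of binary entries, which is why a brute-force scan stops being an option almost immediately.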

There you have it, two very technical use cases of quantum computing.  Most at the moment are being reported in peer-reviewed papers, so sites like arxiv.org are great for browsing if you are interested in keeping up to date on new quantum use cases.  If you know of any other use cases that have been tested, please comment about them; I would love to learn more.  In the meantime, I will try to keep you up to date on the latest articles both here and through my Twitter feed, @greaterthanxbar.

The weird stuff:

The concept of quantum computing is already kind of weird, with the idea that particles can share information with each other and exist in multiple states at the same time.  However, these qualities allow for much weirder things than just quantum computing.  Three such technologies are quantum teleportation, quantum communication, and the quantum internet.

Quantum teleportation relies heavily on the attribute of entanglement.  Current experiments have been successful in teleporting photons of light across long distances.  A hang-up with quantum teleportation is that quantum states are easily disrupted, and moving through the relatively dense and noisy air of the planet means a photon can only go so far before it’s disturbed.  A research group in China recently tried to subvert this distance challenge by launching a satellite designed specifically for quantum experiments and bouncing the entangled photons off of it.  This allowed the photons to travel 870 miles, decimating the previous record of just over 60 miles.  Granted, this test is just a proof of concept and you won’t be making the daily commute Star Trek style anytime soon, but the relatively quick growth in the technology is exciting nonetheless.

Quantum communication, while similar to teleportation in that it relies on entanglement properties and light photons, is a bit more approachable.  In fact, it is being implemented in Jinan, Beijing, and Shanghai as China strives to be a global leader in next-gen tech.  The most prominent driver of quantum communication is the security benefit that comes from quantum attributes.  Because quantum states are fragile, if a communication is intercepted the state is disturbed, and this change can be seen by any user with access, revealing the invader.  There are also technologies such as quantum key distribution that use these quantum properties to secure encryption keys.  So, while quantum communication operates on fiber like boring ol’ telecom of today, the implications it could have on secure communication in a quantum world could be far reaching.
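The best-known quantum key distribution scheme is BB84, and its logic is simple enough to mock up classically.  In this toy run of mine (the physics is simulated with coin flips, so treat it as a sketch of the protocol, not of real hardware), Alice sends bits in random bases, Bob measures in random bases, and they keep only the positions where their bases matched:

```python
import random

# Toy BB84: '+' and 'x' are the two measurement bases.  With no
# eavesdropper, matching bases mean Bob reads Alice's bit exactly;
# mismatched bases give a coin flip, and those positions are discarded.
random.seed(42)
n = 32
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
bob_bases   = [random.choice("+x") for _ in range(n)]

bob_bits = [bit if ab == bb else random.randint(0, 1)
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

key     = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
bob_key = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
assert key == bob_key    # a shared secret that was never sent in the clear
print(len(key), "key bits distilled from", n, "photons")
```

The security hook is what the sketch can’t show: an eavesdropper measuring the photons in flight would disturb their states, so a sample of the kept bits would disagree and the intrusion would be exposed.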

The last weird thing we’ll talk about, besides what goes on in my brain, is the quantum internet.  I’m not going to spend a ton of time here because, like AI, in a few weeks I’ll be spending an entire episode on how this and a number of other technologies can join forces to build a new and better Internet.  You heard that right, this isn’t just something that happens on hilarious tech sitcoms.  For now I’ll keep it short and simple.  Right now, the internet sends classical signals, electrical, optical, and radio, around the world.  A quantum internet would send quantum signals through a network of entangled particles.  This has implications for the speed of quantum processing over the cloud and for increasing the security of sensitive data.  You won’t be using it for everything, though, like listening to this podcast or checking out Trump’s latest social media blunder.  Estimates state a global network will be functional by 2030.

My Idealized World:

There are still a lot of things we don’t know about how quantum computing could change our lives and disrupt problems written off as unanswerable.  There are few applications written for quantum computers and even fewer programmers who develop directly in a quantum language.  Universal quantum processors have only approached supremacy with a few research groups so far, though I expect more will follow soon.  To me, the things I’ve seen accomplished in testing, and what I know about the underlying hardware and theory, are enough to convince me of the potential merits.  As I see it, quantum processors will begin uncovering answers to both everyday and global problems.  Not too long from now you may have an app on your phone being fed traffic advice translated from a quantum computer.  Or you may find that the weather forecasts you see are both more accurate and more detailed.  A drug commercial may come on the television and you’ll note the striking lack of side effects, because chemical compositions will be synthesized more effectively.  Changes will at first be small, because those problems will often be the easiest to solve and implement.  Not long after that, though, you’ll see both private companies and public institutions using the technology to unlock the secrets of everything from how best to solve world hunger to the most effective strategies of war.

That’s all for today folks.  I hope it was informative and enjoyable.  Leave any comments, questions, concerns, or corrections in the section at the bottom of the page.  Next week we talk about millennials and their relationship with technology.  Disclaimer: I’m one of them.

Have a greater than average week!