Episode 8: Planes, Trains & Automobiles

Autonomous Vehicles are Our Future


Transcript of Audio:

Hello and welcome back to Three Deviations Out!  After a short hiatus I am back!  A few weeks ago, we talked about how automation and AI will disrupt our education system and what needs to be done about it.  This week we'll focus on one of the industries that will almost definitely be upended by technological integration: transportation.  Before we discuss planes, trains, and automobiles, a bit of news.  Three Deviations Out is now officially available in the Google Play store! Go download and subscribe to be in the know on all things outlier using the button below. Also, don't forget to follow me on Twitter @greaterthanxbar and connect with me on LinkedIn.  Finally, if you have a cool, society-changing tech you think I should cover, or you want to be on the show, please don't hesitate to reach out.  I would love to hear your ideas.

Autonomous everything seems to be the goal this year, and with partnerships and POCs galore, the tech may just accomplish that goal.  From self-directed drones to shipping tankers, automation would be a business win for the transportation and logistics industry, cutting costs on personnel and error while elevating humans to higher-value tasks.  What those tasks will be, who knows.  But there will be tasks.

What: Self-driving everything, including autonomous consumer vehicles, container ships, ocean liners, and personal helicopters.  Anything that gets you or your product from point A to point B without a human at the helm.

Who: Tesla, Google, Apple, Intel, NVIDIA, Ford, Toyota, Volvo, Mercedes Benz, Uber, Lyft, MIT, US Government, Auburn University, TARDEC, the National Highway Traffic Safety Administration

Why: The average American spends almost 300 hours in a car each year.  In 2016, nearly 40,000 people in the US died in traffic-related crashes, and ninety-four percent of serious crashes are attributed to human error in decision making.  Just by reducing traffic accidents, automated vehicles can make our lives better.  But consumer automobiles are just the tip of the iceberg in terms of transportation and logistics.  From heavy-load automated trains in Australia to automated flying taxis in Dubai, self-driving everything is dragging us humans into the 21st century that people at the turn of the 1900s imagined we would be having.

And that's today's thesis: Automated transportation of both humans and other cargo, including autonomous semis, minivans, cabs, helicopters, and oceanic vessels, will rock one of the largest industries in the world, changing both our socioeconomic and physical landscape.

Today we will cover:

  • The Industry
  • The Tech
  • The Partners
  • The Laws
  • The Use Cases
  • The Idealized World in Amanda’s Head

The Industry

According to SelectUSA, a government effort to create business partnerships, the shipping and logistics industry accounted for 8% of United States GDP in 2015, or $1.48T in spending.  The American Trucking Associations projects that total land cargo shipments will amount to 15.2B tons of freight in 2017, and by 2028 that is expected to increase to 20.7B.  With the surge of ecommerce, shipping is becoming particularly important.  Consumers expect goods quickly and with no hassle.  Last mile shipping is becoming especially critical, as shown by Amazon's recent acquisition of Whole Foods and Walmart's even more recent acquisition of Parcel.  With blockchain partnerships allowing for more consistent and trackable supply chains overall, creating efficiencies through autonomy is just the next logical technical step.

As for the consumer auto market, 2016 marked the seventh straight year of year-over-year sales increases, with total US sales coming in at 17.55M.  This is in a climate where the number of carless households is increasing and rideshare companies including Uber, Lyft, and Zipcar are gaining significant traction in large cities.  So not only are total vehicle sales increasing, there is also a gap in the market to be filled for those who still have to get places but have no personal means of transportation.  Public transportation through autonomous vehicles should see a boost in efficiency and revenue generation as drivers are replaced and accidents avoided.  As seen in Taipei, the initial cost of an autonomous vehicle will be considerably higher than that of a traditional vehicle, but total cost of ownership is expected to be substantially lower.  And as both manufacturing and transportation infrastructure catch up to the technology, cost will come down, as it does with all products on the maturity curve.

The Tech

Before I get into the technology of autonomous vehicles, I will say that up until the point I started researching, I knew next to nothing about it.  I saved writing this bit until last because I really had no idea what I was doing and was tremendously overwhelmed.  So, if you know more than I do about this, and that's got to be a lot of people, feel free to educate me in the comments below or reach out directly via my contact page, Twitter, or LinkedIn.  Okay, let's jump in.

In 2004 DARPA, the Defense Advanced Research Projects Agency, began a series of challenges that kickstarted the autonomy movement of today.  Since then it's been off to the races for autonomous tech spanning cars, tanks, semis, ships, trains, and planes.  Actual automation for vehicles dates back to well before then, but the current era of technology is far enough beyond the previous one that it is almost a different species.  Autonomous vehicles use a combination of ultrasonic, radar, image, and Lidar sensors to see and react to what is going on around them; I've put a toy sketch of how those readings might come together right after the list below.

  • Ultrasonic Sensors: Using sound waves, these sensors determine how far the vehicle is from another object.
  • Image Sensors: Cameras on the vehicle work as its eyes to see the surroundings and some are even capable of 3D vision.
  • Radar Sensors: These work as another type of range detection, with waves responding to surrounding objects and reporting back.
  • Lidar Sensors: Lidar is able to map out the world around the vehicle using a low intensity laser.
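
To make that division of labor a bit more concrete, here is a minimal, purely illustrative Python sketch of how per-sensor distance estimates might be fused into a single obstacle estimate.  The sensor names, confidence weights, and 5-meter braking threshold are all made up for the example; this is not how any production driving stack actually works.

```python
# Toy sensor fusion: combine distance readings from different sensor types
# into one confidence-weighted estimate, then make a simple braking decision.
# Everything here (weights, threshold, readings) is illustrative only.

from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str         # "ultrasonic", "radar", "lidar", "camera"
    distance_m: float   # estimated distance to the nearest obstacle
    confidence: float   # 0.0 - 1.0, how much we trust this reading

def fuse_nearest_obstacle(readings):
    """Confidence-weighted average of the per-sensor distance estimates."""
    total_weight = sum(r.confidence for r in readings)
    if total_weight == 0:
        return None  # no usable data; a real system would fail safe here
    return sum(r.distance_m * r.confidence for r in readings) / total_weight

def should_brake(readings, min_clearance_m=5.0):
    distance = fuse_nearest_obstacle(readings)
    return distance is not None and distance < min_clearance_m

readings = [
    Reading("ultrasonic", 4.8, 0.60),
    Reading("radar", 5.2, 0.90),
    Reading("lidar", 5.0, 0.95),
]
print(fuse_nearest_obstacle(readings))  # ~5.02 m
print(should_brake(readings))           # False at a 5 m threshold
```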

In addition to sensors, autonomous vehicles require processors and software to interpret all that collected data.  While there are a number of software options available, often through a proprietary partnership with a technology firm like Intel, Microsoft, or Google, the go-to processors are built by NVIDIA.  Also a hardware player in the cryptocurrency game, NVIDIA has brought some specific contributions to the driverless market.  Because the sensors embedded in the vehicle are collecting so much data, there needs to be a lot of processing power to analyze all of it quickly.  Unfortunately, that has often meant hauling around enough computing hardware to build another vehicle, on top of the parts the car actually needs to move.  The new Drive PX Pegasus turns that concept on its head.  The processor is about the size of a license plate and has the capability to support a level 5 autonomous vehicle.  We'll get into exactly what that means in a minute, but essentially, it's a truly autonomous car.

The last piece of tech I want to touch on is the Internet and how that ubiquitous tool plays into this evolving industry.  Autonomous vehicles are connected to the Internet, which means they are connected to each other.  So in addition to sensing what is around them, these vehicles can access data put out by other vehicles.  The more autonomous cars on the road or ships in the sea or planes in the air, the more efficient they'll get, because they'll all be sharing data to make one another better.
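
Here is an equally toy sketch of that data-sharing idea: each vehicle broadcasts hazard reports, and nearby vehicles pick them up.  The message fields, the in-memory 'network', and the crude distance check are all invented for illustration; real vehicle-to-vehicle systems use dedicated radio protocols and standardized message sets.

```python
# Toy model of connected vehicles sharing hazard reports over a common channel.

import time

network = []  # stand-in for a shared vehicle-to-vehicle channel

def broadcast(vehicle_id, lat, lon, hazard):
    """Publish an observation so other vehicles can react to it."""
    network.append({
        "vehicle": vehicle_id,
        "lat": lat,
        "lon": lon,
        "hazard": hazard,
        "timestamp": time.time(),
    })

def hazards_near(lat, lon, radius_deg=0.01):
    """Return hazards other vehicles reported close to a given position."""
    return [m for m in network
            if abs(m["lat"] - lat) < radius_deg and abs(m["lon"] - lon) < radius_deg]

broadcast("car-42", 40.7128, -74.0060, "black ice")
print(hazards_near(40.7130, -74.0058))  # a trailing car now knows about the ice
```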

The Partnerships

Lyft and Ford, Ford and Autonomic, Google's Waymo and the National Safety Council, Uber and Tesla, and endless more: the autonomous driving sector is rife with high-profile partnerships that make friends of would-be competitors.  As is the theory with other emerging tech trends and the open source movement in general, many heads are better than one.  This is especially true in a new sector that is merging two industries that have slowly been colliding for years.  Transportation has steadily become more technological, and for years there has been software written directly into consumer and enterprise vehicles alike.  Autonomous transportation just ramps that trend up tenfold, requiring software, hardware, and mechanical giants to step up and work together in rolling these machines out across the globe.  As the technology becomes more ingrained, though, and deployment moves past the R&D phases, expect consolidation, with various partners purchasing departments of another partner's firm to ensure consistency of vision going forward.  Imagine it as a more civil version of what Uber did to Waymo.  As is the way of business.

The Laws

Twenty-one states currently have enacted legislation relating to autonomous vehicles, and another five have executive orders regarding the emerging tech.  These laws are a hodge-podge of regulation about permits, testing locations, and the ability to take over for the vehicle.  As of last week, however, a federal bill has been passed by the Senate Commerce Committee that would allow self-driving car manufacturers to sell a specific number of autonomous cars in each of the next three years, provided the vehicles meet standards set by the National Highway Traffic Safety Administration (NHTSA).  Now the bill waits on the Senate floor for approval, while the House has already passed its version.  Currently the NHTSA defines six levels of automation, numbered 0 through 5 (I've sketched the split in responsibility in a quick snippet right after the list):

  • Level 0 (no automation whatsoever): This is that first car you got in college that you thought was so cool but actually it had roll-up windows and a tape deck.
  • Level 1 (driver assistance): This includes features such as brake assistance, backup cameras, and notifications when you get too close to the curb.
  • Level 2 (partial automation): Here cool features like lane assist and auto-parking start to work their way into your car.  I myself have not had the pleasure of using one of these.
  • Level 3 (conditional automation): This is the style of what we would call a 'self-driving' car that we see on the road today.  While you the driver still need to be around, you're not actually doing much.
  • Level 4 (high automation): The only time you or I have to take over in a level 4 car is during situations like a good ol' Nor'easter or a full-on Hurricane Harvey.
  • Level 5 (full automation): As we'll see in a minute, this is the goal of the Mercedes-Benz pod.  These cars won't even have brake pedals or steering wheels, because they've got this.
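
To keep those levels straight, here is a rough sketch of the definitions as data, with the split in responsibility made explicit.  The one-line summaries are my paraphrase of the list above, not official NHTSA wording.

```python
# The six automation levels, paraphrased, with a helper showing where the
# human stops being responsible for the driving task.

AUTOMATION_LEVELS = {
    0: ("No automation", "the human does everything"),
    1: ("Driver assistance", "the car helps with braking, cameras, warnings"),
    2: ("Partial automation", "lane assist and auto-parking, human supervises"),
    3: ("Conditional automation", "the car drives, the human must stay ready"),
    4: ("High automation", "the car drives except in extreme conditions"),
    5: ("Full automation", "no steering wheel or brake pedal required"),
}

def human_must_be_ready(level):
    """Through level 3, a human still has to be prepared to take over."""
    return level <= 3

for level, (name, summary) in AUTOMATION_LEVELS.items():
    ready = "yes" if human_must_be_ready(level) else "no"
    print(f"Level {level}: {name} ({summary}) - driver needed: {ready}")
```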

As we get closer to level 4 and level 5 automation there will continue to be laws enacted around the technology, especially because it is so intertwined with both our physical infrastructure and the revenue-generating machine that is our government.  From the looks of things so far, government is generally on the side of automation, and laws are making it easier, not harder, for manufacturers to create safer driving conditions.  I would like to see that trend continue, but we'll have to wait and see what happens.

The Use Cases

There are so many possibilities for applications of autonomous machines that I couldn't possibly get into them all today.  Instead I've chosen to take one from planes, one from boats, and a few from automobiles.  That will give us a few more use cases than usual, but if you haven't seen how gung-ho companies are about getting automated, take a look through YouTube and you'll see just how many options there are out in the world.

Waymo: While Uber was the first to bring automated rideshare to the public, Waymo is the first to bring a truly successful consumer product to testing and use in the real world.  On April 24, 2017, Waymo CEO John Krafcik published an article on Medium describing the company's Early Rider program in Phoenix, AZ.  This allows the public to apply to be some of the first consumers to test out Waymo's autonomous vehicles.  Waymo started out in 2009 as a moonshot project in what is now Alphabet's X subsidiary and became an independent company in late 2016.  Self-driving testing has been going on ever since the project began.  Waymo's fleet, Chrysler Pacifica minivans running the company's own self-driving software on Intel hardware, has driven in excess of 3 million miles and will eventually reach level 4 or 5 autonomy, which means no human intervention will ever be necessary.  Though this may still be a few years out, the prospect of a functioning fully autonomous fleet of taxis waiting at the beck and call of my smartphone is making me drool a little bit.

TARDEC:  TARDEC and Auburn University recently test drove a platoon of vehicles across the Canadian border and back.  This wasn't just any platoon though; these were linked semi- to fully-autonomous vehicles that ranged in size from passenger vehicles to vehicle carriers.  In a bold move for a military unit, the partners recorded the entire trip, both from inside and outside the vehicles, and posted an edited version on YouTube.  Automated vehicles would go far in creating safer situations for soldiers in both cargo and mission situations while enabling more diverse strategies in varied terrain.  This is the first time the military has shown this technology to the public, and generally the assumption can be made that military technology is further along than what we consumers see in the commercial world, so I expect that what we see in this video is only the tip of the iceberg.

Mercedes-Benz: Mercedes-Benz has always been the height of consumer auto luxury.  That reputation is expected to extend to the autonomous vehicle market.  At the 2015 Consumer Electronics Show the company unveiled the F 015 Luxury in Motion concept car, one that takes advantage of level 5 automation.  The car features swivel chairs so that during full autonomy passengers can face each other for card games and chit chat, classy wood flooring, 360-degree screens for a VR experience, and saloon-style doors.  Unlike the Chryslers in Waymo's fleet, these Mercedes are more likely to be membership or co-op vehicles where ownership is shared by a few individuals who can call on a car as needed.  Let's just hope Becky and the fam aren't hogging it for soccer practice and piano lessons 5 days a week.

Høglund: Høglund is a marine automation specialist operating out of Norway.  The company has outfitted numerous shipping tankers across the globe, and a new project with Ektank and Utkilen will integrate linked automation systems into a fleet of 8 tankers.  This system will allow the ships to all 'talk' to one another, give greater insight through efficient data collection, and allow Høglund to handle 90% of updates and repairs remotely.  Ektank and Utkilen see these efforts as long-term cost savers, expecting greater efficiency and longer lifespans than ships which are not integrated.

Passenger Drone: Dubai has made it a goal to be a fully integrated smart city in hopes of making its citizens' lives happier, and one step towards that goal is to build out a system of flying taxis to bring you anywhere you desire in an autonomous drone.  The city has partnered with a number of companies including Volocopter and EHANG to bring a new type of transportation to Dubai.  While the idea of flying people around in drones at the commercial level is fairly new, drone technology in its simplest form has been around since 1916 and is much further along the tech maturity curve than other emerging technologies being pursued in the market today.  The country's smart city objectives are led by Dr. Aisha Butti Bin Bishr and aim to make the city the happiest on the planet.  Heck, you could be taking a flying drone taxi in the next five years; that makes me pretty happy!

The Idealized World

The biggest changes that autonomous vehicles will bring to our daily lives are in the amount of time we have and the space we see around us.  Fully autonomous driving for all vehicles on the road means that road signs, traffic lights, and even lines on the road become irrelevant and will eventually become nonexistent.  There will no longer be the stresses of driving, and road rage will be a thing of the past; we'll all have to get our aggression out somewhere else.  And the time we will have.  I don't know about you, but right now I spend 25 minutes driving to work each day, for a total of 50 minutes round trip.  If I could spend those 50 minutes a day doing something else, like reading a book or learning French or catching up on some emails, my days would become a whole lot more productive.  Or maybe I just spend that extra 50 minutes taking a cat nap and am a little more pleasant in the office.  Either way, at the pure consumer auto level this means big changes.

Not only will I have more time, but the car I am in most likely won't be mine.  Maybe I have a little extra cash and I've joined a car club, so me and 50 of my closest friends all subscribe to a certain number of different types of cars.  On my way to work I order a commuter car that may include a few other members who work close to my building.  Next week, though, I have to take a business trip a few hours away, and for that I'll order a sleeper car from the membership club that will show up at my house with a nice cozy bed inside of it.  Instead of killing myself driving through the night while trying to prep for that presentation in the morning, I get a few hours of sleep and still have plenty of time to wow the audience.  Or maybe I don't have a membership club, so instead of paying an annual fee I order cars up like cabs and pay by the trip.  There are a lot of options that can start to come into play, but car ownership is out the door, unless you're an enthusiast.

The final thing I'll touch on is how much this will change industry and what that means for us consumers.  Mostly, it means things will be cheaper.  Error-free and driver-free shipping means a dramatically lower cost of moving products, which means lower prices and higher profit margins.  There will be a dramatic transition period here where we may see an entire industry go jobless, or we will see a large union lobbying campaign combating the use of level 5 automation in shipping and transportation.  My bet is that eventually things will sway towards the autonomous, because business will continually look toward the bottom line in a free market, and this industry has a lot of cash flow to break down any legal barriers.

Thanks for joining me for another episode of Three Deviations Out! I hope you enjoyed it as much as I did.  Next week we cover AI, robots, and the end of humanity as we know it.  It’s going to be a gas!  In the meantime, catch up on old episodes of the podcast, follow me on Twitter, and connect with me on LinkedIn.  Goodnight and Goodbye!

Amanda


References:

Episode 7: Education Cleavage



Transcript of Audio:

Welcome back to Three Deviations Out!  Last week we talked about the distributed internet and how to cut the ultimate cord.  This week we'll cover the very real and distinct split in our education system.  Fair warning: this week was pretty hectic, so the episode is short.  Education is a massive topic I will be revisiting at a later date, don't worry.

Now, before I get all riled up, let’s cover the basics:

What: The failure and subsequent splintering of education ideologies across the country as we all try to figure out how to break away from our antiquated system that will ruin us in only a few years.

Who: United States Department of Education, Common Core Advocates, Charter School Advocates, Teachers' Unions, Private Schools, Non-Profit Education Initiatives, the Montessori Schools, F-infotech, Eduventures, Codecademy, Duolingo, Lumosity, IBM, P-TECH

Why:  In just a few years there will come a cleavage point in our society where the majority will not be able to get jobs with the skills they learned in high school or college, and those already in the workforce will have their jobs automated.  The choice will be to either upskill or give up and fall back on the state.  Unfortunately, at the scale expected, our country will not be able to support the number of individuals whose skills will not fit into the new economy.  When that happens, we need to have a fallback plan to ensure the ability to keep going, to keep innovating.

83.3% of full-time, first-time postsecondary students received some sort of federal financial aid in the 2014 – 2015 school year, up from 75% ten years earlier. Overall, 38.3% of undergraduate students took out student loans in 2014 – 2015, with an average loan of $6,831 annually resulting in $27,324 of total debt over an expected 4-year graduation.  Of students who started a 4-year program in 2009, though, only 53.8% nationally graduated within 6 years of their start date. As for high school, over the course of 4 years an average of 3.6% of students drop out nationally, with the majority (4.8% overall) dropping out in their senior year. In 2015 the United States landed right at the average of OECD countries in reading and science, and below the average in mathematics.  As a country we are also investing less over time in primary and tertiary education while other OECD countries spend more, with steady declines in investment since 2009 even though the economy has only grown since then.  Our country's teachers are overworked and underpaid compared to their peers abroad and have minimal time to plan lessons and give feedback to students because they are consistently at the front of the classroom, teaching.  That'd be like expecting me to have a six-hour blog post ready to go each day, all prepped and researched by myself.  I could maybe do it for a week.  If that.

So, what we get out of that ramble is what many of us already know.  Our country's education system is broken.  It's been broken for a number of years.  However, with the technological changes making their way to the mainstream right now and a major socioeconomic shift in how we perceive what a 'job' is just over the horizon, something's going to break.  Estimates suggest that 38% of American jobs could be lost to automation over the next 15 years, with the biggest hits suffered in routine tasks such as manufacturing, administration, transportation, and logistics.  Using August's jobs numbers of roughly 147 million people employed, that is the equivalent of nearly 56 million people across the country losing their jobs with no chance of receiving another position in their field.  That's more than the entire population of California.
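
The back-of-the-envelope math behind that figure, with the employment base as an explicit assumption (I'm using roughly 147 million jobs for August 2017; plug in whatever figure you prefer):

```python
# Rough scale of the automation estimate. The 38% share comes from the text;
# the employment base is an assumption you can swap out.

employed = 147_000_000    # approximate US employment, August 2017 (assumption)
at_risk_share = 0.38      # share of jobs estimated automatable over ~15 years

at_risk_jobs = employed * at_risk_share
print(f"{at_risk_jobs:,.0f} jobs at risk")  # 55,860,000 jobs at risk
```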

Today’s theory: A technological revolution will upend the socioeconomic system we’ve been contentedly dealing with for the past century or so, and we are not prepared for that.

What we’ll cover today:

  • Informal Education, including Public/Private Partnerships and Technological Reform
  • Use Cases
  • Things You Should Absolutely Know
  • The Idealized World in Amanda’s Head

Informal Education:

Education is increasingly becoming an intrinsically motivated pursuit.  From Massive Open Online Courses (MOOCs) to online universities, high-skill apprenticeships, technology incubators, and education apps, online or informal education is a growing trend.  With easy access, and with cost and other barriers to entry removed, many are coming to understand that a traditional formal college education does not necessarily provide the skills a person needs to succeed in the workplace or in society after graduation.

Use Cases:

As a mature and widespread market, education has a wide variety of use cases I could choose from to talk about here.  Because of that, I’ve chosen just one to focus on for each of the categories I covered just a minute ago.  The real-world examples I will be focusing on are: BASIS Scottsdale, Onondaga Community College, P-TECH and IBM, and Coursera.

BASIS Scottsdale:

The number one charter school in the country started in 2003 with 138 students.  The goal of the school is to merge STEM ideals with a liberal arts education, idealizing the merging of the fuzzy and the techie.  While focusing on traditional STEM and liberal arts subjects, there is a high emphasis on time management and organization.  For 5 years the organization has been listed in the Washington Post’s ‘Top Performing Schools with Elite Students’.  The option to go charter is a difficult one that all families have to make at an individual level.  A student may have more availability to resources and be more challenged than in a traditional public-school setting, but in turn may also have a more focused academic path chosen for them instead of being able to choose it themselves.  This school has proven its clout though and this and the other BASIS charter schools are making this style of education very enticing.

Community College

Community colleges are on the rise.  42% of undergraduates were enrolled in a community college in fall 2014.  Compared to the lofty price of a public or private four-year college, community college tuition ranges from just under $1,500 (California) to just over $7,500 (Vermont).  And depending on which state you live in, a community college will give you the same quality of education and access to resources as a full-time university while offering more flexibility and less expense.  Many community colleges have partnerships with full-time universities to ensure credits transfer if you decide to move from the two-year program to a four-year program, and business partnerships give these institutions access to internships and apprenticeships with employers who may not be interested in those working toward a full-time degree.

P-TECH & IBM:

There are 56 P-TECH schools in the country.  P-TECH stands for Pathways in Technology Early College High School, and these schools span grades 9 to 14, instead of the traditional 9 – 12 model.  IBM has developed these schools to give students an opportunity to earn a no-cost associate's degree in a subject area that has been deemed necessary by business.  The focus is on STEM but also incorporates other necessary workplace skills like communication and business analysis.  Students are focused on a path toward a career from day one of 9th grade and can easily see the path to graduation with an accredited degree, removing the stressors of college application and tuition.  This public/private partnership allows for a different path than the traditional high school, and offers an interesting case study on what tailoring education to business needs actually accomplishes.

Coursera

Coursera is a great meeting of the altruistic and the capitalistic minded.  The site does a great service; it provides high quality education through lecture, tutorials, class discussions, and interactive exams all available online.  Mostly these courses are free, or some aspect of them is free.  But in instances where someone wants to have a bit more recognition for what they’ve done there is the ability to ‘purchase’ a class and receive a certificate at the end indicating they’ve obtained that skill.  This allows for the more traditional university feel, with a type of degree received at the end, with a much lower price and more flexible hours than available traditionally.  Coursera has offerings spanning from deep learning to linguistics to art history and software development.  There are tutors available, study groups within each course, and access to top tier academic scholars.  Right now Coursera partners with 145 post-secondary institutions, has 25 million active users, over 2,000 courses, and four full-length degrees.

Things You Should Absolutely Know:

It's hard to guess what skills we humans will need to have in the next decade or so.  With technology moving as quickly as it is, we can make a lot of assumptions, but really, it's all up in the air until it actually happens.  Still, there are a few things that are always key when interacting with one another and moving our society forward.  First of all, the ability to simply interact is key.  Communicating effectively and being able to understand others when they're trying to communicate with you is a skill that will never go away, and it will only become more important as many of us work in more collaborative settings than we would have 10 years ago.  Another key skill will be the ability to access and assess our emotions in a positive way.  This goes along with communication but also ties in creativity and out-of-the-box thinking.  By accessing and assessing our emotions I don't mean getting all touchy-feely all the time.  What I mean is the ability to be in a situation, check yourself, and make sure your reaction is appropriate to what is going on around you.  For example, if you are giving an important and widely watched speech, don't allow yourself to get worked up into an angry fit and call people names.  Again, as we get more collaborative in our workspaces and communities, situational awareness and empathy are key skills.  Lastly, everyone should know how to use a computer.  I'm not saying you need to be able to code in C++ or hack into classified documents.  Just know the basics, and be willing to learn more as you do more in your online life.  Maybe take a Coursera course on web development or HTML, or do some reading on the history of the semiconductor.  Tech is pervasive now, there's no escaping it.  So know something about it.

The Idealized World:

There's a realization we have to face as a society, and we have to do it quickly.  That is, we don't know what skills will be needed in the job market in 10 years.  The majority of us are just making wild guesses while a few very impressive humans are making more educated predictions.  Especially with robotics and AI automation potentially able to complete a lot of the tasks that make up entire current industries, we need to approach education differently.  Instead of the outcome of education being a specific job, we should consider the types of traits and skills we want to see in our overall society, both professionally and socially.  We need to consider what it means to be uniquely human, and what kind of humans we want to be.  And this especially means that education, whether through formal or informal channels, does not stop at adolescence.  Learning new skills and being introduced to new ideas as a lifelong endeavor needs to become the norm, not the exception.  As we move to a less 'job focused' socioeconomic system we need to change our habits and preconceptions or risk falling into the proverbial pit.

Thanks for joining me for another episode of Three Deviations Out.  I hope you enjoyed it.  Leave comments, concerns, questions and arguments below, and follow me on Twitter @greaterthanxbar.  Next week we have a special treat, Jen Hamel will be joining us as Three Deviations Out’s first guest!  Follow her on Twitter @jenhameltbr and join us next week to talk about Artificial Intelligence.  The Robots are Here!

 

References:

Episode 6: Distributed Internet


Transcript of Audio:

Hello Deviators! Welcome back to Three Deviations Out! Last week we explored the wonderful world of data storage, including cool innovations like DNA and other 3D storage.  This week we're delving into the murky waters of the distributed internet, a topic that until recently has been confined to peer-reviewed papers and TV comedies.  Essentially it involves taking the distributed network of cables running across the globe connecting us all and, instead of relying on those cables, relying on the invisible, wireless connections our devices make with one another.  Before we get into that, let's cover the basics:

What: The distributed Internet, essentially an Internet 2.0 that takes advantage of blockchain, IoT, and other emerging technologies and has the potential to solve a number of issues with the current internet structure.

Who: Ethereum, RightMesh, Georgia Tech, Australian National Laboratory, Maidsafe, Golem, Raffael Kemenczy, FireChat

Why: The Internet was a game changer.  That's fairly universally accepted.  Moving information, knowledge, communication, and process from manual and hard copies to online has allowed greater access, transparency, and connection.  The internet took what was great about computers and allowed them to reach their current maximum potential by connecting them, and us, all together.  The distributed internet takes that same idea and expands it, breaking down barriers to entry and allowing for an even more democratized and transparent system.  What we have now is great, but it could be better.  Over the last few decades we've had the chance to see in real time what large scale change the internet has brought to societies across the globe.  Expanding that even further to the remaining half of the population that is still unplugged, and fixing some of the bugs so the system works in the favor of the many instead of the few, has the potential to be revolutionary.

So, that's the theory of the day.  Distributed Internet is an outlier because it takes how we use the internet today, already a revolutionary tool, and injects it with democratizing steroids.  Theoretically.  Let's explore! Today we're covering:

  • The Internet Now
  • Problems
  • Solutions
  • Use Cases
  • Idealized World

 

The Internet Now:

In 1962 Paul Baran published 'On Distributed Communications Networks' with the RAND Corporation, and thus the Internet was born.  Kind of.  Like all great tech, this now ubiquitous aspect of our lives started with a research paper.  The Internet as we know it now is composed of a complex system of fiber and coax lines that connects you to the billions of other online humans.  As of today (Tuesday, Sept 12, 2017) there are 3.726 billion global Internet users, 1.252 billion websites, and 1.992 billion Facebook users.  Just today, 117.4 billion emails have been sent, 2.69 billion Google searches completed, 2.524 million blog posts written, 334 million tweets sent, 3.2 billion videos watched, and 44,782 websites hacked.  Total global internet use for the day reaches over 2.19 billion gigabytes, and it's barely afternoon.  Jeepers that's a lot of data!  Since the first website went live 26 years ago on August 6, 1991, the Internet has become entrenched in half the world's daily lives.  It brings us goods through ecommerce, services like news, legal advice, health advice, any advice, and outlets through forums like Facebook, Reddit, Twitter, Quora, and others.  It has brought new prosperity to community support with easy-to-use peer-to-peer networks and crowdsourcing.  Most of all, the Internet has brought about the democratization of knowledge, like a more digestible, more robust, easier to access library.

Problems: 

As great as the internet is now, it could always be better.  There are some serious problems with today's global knowledge center, the most glaring being that only about half of the world's population is online.  This stems largely from a lack of infrastructure; the cause is not difficult to see, but the solution to such a problem is less easily identifiable.  A few less straightforward problems we'll cover today are content attribution, corporate consolidation, regulation, and anonymization.  I chose these issues because I consider them the most pressing of the internet's problems.  Comment below if you have thoughts on other issues or believe the internet is perfect as is and there's no reason to change it.  I'd love to hear your opinion.

Content Attribution:

Content attribution is the idea that a content creator receives credit for the content they produce.  What counts as credit is often flexible, so here I am defining credit as ownership of content and any repercussions thereof.  So, if there are some copyright shenanigans going on with a piece of content, it's on the creator.  But also, if content is shared globally and goes viral, that is sourced to the creator and they reap the benefits.  As it stands now, most online media sites including Facebook, Twitter, Pinterest, YouTube, Reddit, Quora, and LinkedIn obtain a 'worldwide non-exclusive royalty free license (with the right to sublicense) to use, copy, reproduce, process, adapt, modify, publish, transmit, display and distribute such Content in any and all media or distribution methods (now known or later developed).'  That's directly from Twitter's terms of service, but the others are extremely similar.  Of those I looked at, only Medium ensured the user that content would remain the creator's and that any distribution would be run by the creator before it happened.  As it stands, the majority of content on the internet doesn't belong to the very large company hosting it, but that company has a free, worldwide license to distribute or destroy it.  This can lead to controversial scenarios such as the removal of YouTube videos showing violence, which happens to also destroy the very detailed video history created by the people of Syria of the war ravaging their country.  It also leads to the need to own a piece of property on the internet before your content is actually yours.  While that property cost may be comparably small to physical property, the barrier to entry still exists.  This is most damning in areas where poverty is high and the need to be heard is rampant, yet just being heard gets those creators little compared to if the content they were building was theirs.

Twitter Content Terms | Facebook Content Terms | Instagram Content Terms | Pinterest Content Terms | Reddit Content Terms | Medium Content Terms | LinkedIn Content Terms

Corporate Consolidation:

The internet was meant to be a global democratization of knowledge accessible by anyone.  I don’t know about you though, but I have one internet and cable provider available to me in the region that I live in, Comcast.  Where I last lived there was also one, Time Warner Cable now Spectrum, but Verizon Fios was working its way around the town and it was one of the most exciting things to happen in years.  Okay, so it was a small town and not much happened.  But having two internet providers is like seeing a unicorn and we were ready to embrace the magical creature even if it would take years for implementation.  Currently I am fascinated by this great map that shows the number of broadband providers in a given area in the US (the link is below).  There is a maximum of 13 providers in any given area across the country, but the number of areas with three or more providers is dramatically lower than the areas with only two providers.  And when you filter for areas with only one internet and cable provider, almost the entire country lights up.

Consolidation is not only happening with providers.  There is also massive consolidation, and there are silos, in the actual use of the internet.  What browser do you use?  I bet I could guess.  At most I would need four guesses: Internet Explorer, Google Chrome, Microsoft Edge, or Firefox.  There are others, but the majority of internet users take advantage of one of those four.  As far as search engines go, you probably use Google, but you may use Bing.  Unless you use Tor, and then that's a whole other story.  But in general, there is heavy consolidation in both the delivery and the use of the internet, leaving those in power with the ability to make the rules up as they go and ignore us plebeians just trying to effectively use this truly amazing tool.

Regulation:

These corporations that have become highly consolidated are also highly unregulated and have been that way since inception.  Because of this, providers have been able to drive prices up for mediocre services, and those who got into the business at the beginning have enough influence over those who would write regulation that reform isn't going to happen without major lobbying changes.  This lack of regulation just breeds more of the same as the companies consolidate (e.g., Charter's acquisition of Time Warner Cable last year) and power concentrates with them.  The only leg that we consumers have to stand on is antitrust (e.g., Comcast's failed attempt to acquire Time Warner Cable two years ago).  Without effective regulation these companies operate essentially as monopolies in the territories they have sliced up and come to agreements on with the other providers.

Anonymization:

The internet has given access to free speech for all, an extraordinary feat and a great addition to humankind.  Unfortunately, this free speech has come on the heels of anonymization, and that has led to a growing divide across ideologies as individuals and rhetoric online becomes more and more charged.  Being able to hide behind a screen and a username brings people a confidence they don’t have in the physical world; confidence to say something they may not have, to back an opinion they may not believe in, or to attack someone they may otherwise be polite to.

Solutions:

In this ever-quickening age of emerging technology there seem to be solutions promising to address every type of problem the world has to throw at us humans except for maybe us humans.  Oh wait, we have robots for that.  We’ll get into that in a few weeks.  But what this means is since the inception of the internet there have been some pretty tremendous breakthroughs in technology that have the potential to fix the problems we just spoke of.  I say ‘have the potential’ because you never really know if someone can swim until you throw them in the water.  Until we have analytical proof though, what I’ve seen of these technologies is enough to convince me that there can be a true concerted effort to right the wrongs of our global communications system.

Blockchain:

Blockchain is the killer of the troll, especially when used in the right way.  A distributed ledger allows every transaction to be tracked, with transparency, all the way back to the chain's inception.  By attributing blocks to individual users, other users are able to see the actions and history of that user.  While this still allows a user to stay anonymous from their physical persona, it essentially creates a line of credit for their online persona.  If a user is discredited in one forum or community, that will follow them to the next forum or community they enter.  They will then have to build their credit back up from the wreckage.  Sure, they could create another user and wipe the slate clean, but that would be like having no credit, and I don't know about you, but I know what a bank says to someone who has no credit.
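
Here's a bare-bones sketch of that 'line of credit' idea: a hash-chained ledger of reputation events tied to a pseudonymous user ID.  The field names and scoring rule are made up for illustration, and this leaves out everything that makes a real blockchain work (consensus, signatures, distribution across nodes).

```python
# Toy hash-chained reputation ledger. Each block commits to the previous one,
# so a user's history is tamper-evident and follows their pseudonym around.

import hashlib, json, time

def make_block(prev_hash, user, action, weight):
    block = {
        "prev_hash": prev_hash,
        "user": user,        # pseudonymous ID, not a real-world identity
        "action": action,    # e.g. "helpful_answer", "spam_flagged"
        "weight": weight,    # positive or negative reputation impact
        "timestamp": time.time(),
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def reputation(chain, user):
    """Sum a user's reputation from the full, transparent history."""
    return sum(b["weight"] for b in chain if b["user"] == user)

chain = [make_block("0" * 64, "user_a1b2", "helpful_answer", +5)]
chain.append(make_block(chain[-1]["hash"], "user_a1b2", "spam_flagged", -10))
print(reputation(chain, "user_a1b2"))  # -5: the bad behavior follows the persona
```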

IoT: IoT is a global, diverse, and distributed collection of devices that talk to one another wirelessly, ranging from those that fit under our skin to those that fit in our pockets and those that take up floor space in our kitchens.  From RFID trackers to sensors on manufacturing equipment to drones and washer-dryers, connected devices are being integrated into nearly every current generation of hardware.  Wifi enablement has allowed these devices to become integral to both our daily lives and corporate processes, with use cases spanning from consumer to enterprise.  What this means is that more connected computing power is moving around the globe in shipping containers and in our pockets today than even existed 30 years ago.  And all of it operates wirelessly.  This is the key to the distributed internet: there already exists a wireless, mobile network of hardware.  Essentially, this is what the phone lines were to the original internet connections.  The internet now runs on its own fiber or coax, but the first connections relied on existing infrastructure, just as internet 2.0 will rely on the existing infrastructure of wireless devices.

Quantum:

Quantum computing and especially quantum communications will be the second wave of internet 2.0, much as dedicated fiber and coax lines were the second wave of the internet.  Quantum communication over an established network has the potential to open up communication lines to the entire currently unconnected population.  However, that will take more than a few years to establish, and a reliable quantum communication network cannot be expected until 2030 at the earliest.  Until then we must rely on what we have and hope that, like AI and quantum computing (which is different from quantum communication), quantum communication progresses faster than expected.

Early Adopters:

As I mentioned a few weeks ago us millennials are the generation of tech for tech’s sake.  We are early adopters, we are those who gave Google Glass a chance and are willing to implant chips in our skin for work.  That is exactly what is needed in the distributed internet.  The general concept creates better service as there is more adoption, and there needs to be trust from the early adopters that things will get better as word spreads.  So don’t be wary, initial users, it will get better.  The way mesh networks work is by bouncing off devices within a given distance of one another.  As wifi and Bluetooth ranges improve so will these applications but what will improve the services faster is just having your friends sign up and them having their friends sign up and their friends and so on.

Use Cases:

While implementation of what could be considered a distributed internet is still in its infancy, there are a few companies and institutions that have begun to lay out how the overall theory would come into reality.  Today we look at RightMesh and FireChat, both of which are attempting to bring the distributed internet into reality.

On September 7, 2017, RightMesh released a whitepaper outlining a distributed internet system that operates off the devices already in use globally.  The company relies on software that can be installed on any device and allows users to profit off of unused processing power and internet bandwidth.  Currently the software is in beta, available on Android and Java-enabled devices, and not available in the US or Canada.  Key to the RightMesh platform are developers creating integrated apps in local languages, and the company has provided a free software development kit.  As with other mesh networks, this one relies on a userbase, and it gains strength and stability as that userbase increases.  As adoption rates increase we will better understand the benefits and implications of this particular application of distributed networks.

Many advocated using FireChat for those caught in the catastrophic hurricanes of the last month, as power lines, internet connections, and cell towers went down across entire major cities.  FireChat is a messaging app that allows users to communicate with one another across a given distance without internet connection or cell service.  It uses the Wifi or Bluetooth embedded in a device's existing hardware to send a message.  Range is limited to just over 200 feet between any two devices, but messages are able to bounce across other devices within the network until reaching their destination, so the more people who download the app, the further messages are able to travel.  This is particularly advantageous in exactly this type of disastrous situation, where people may be stuck in closed-off homes with no communication available as rescue and recovery teams pass by outside.  Again, the downside to this application is the need for mass buy-in; a larger community of users means better service, so the first users may experience less than satisfactory service.
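
To show why every extra user helps, here is a toy model of multi-hop message passing across a mesh of phones.  The positions, the 200-foot range, and the breadth-first routing are all illustrative; FireChat's actual protocol is certainly not this simple.

```python
# Toy mesh routing: a message hops phone-to-phone as long as each hop is
# within radio range, so more nearby users means longer reachable distances.

from collections import deque
import math

RANGE_FT = 200

phones = {  # phone ID -> (x, y) position in feet
    "amy": (0, 0),
    "ben": (150, 0),
    "cam": (300, 0),
    "dee": (450, 50),
}

def in_range(a, b):
    (x1, y1), (x2, y2) = phones[a], phones[b]
    return math.hypot(x1 - x2, y1 - y2) <= RANGE_FT

def route(src, dst):
    """Breadth-first search for a chain of phones that can relay a message."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in phones:
            if nxt not in seen and in_range(path[-1], nxt):
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # not enough phones nearby to bridge the gap

print(route("amy", "dee"))  # ['amy', 'ben', 'cam', 'dee']: every hop under 200 ft
```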

Idealized World:

I am an internet idealist.  An optimist some may say.  I believe in the internet as a democratizing force that enables users through knowledge and opportunity to make something better out of their lives.  It could be the great equalizer.  Right now, it is not.  Which is unfortunate but there’s no help in moping over it.  Instead the generation of tech can get off our asses and actually do something about it, like build a better one.  With the emerging technologies we have available at our fingertips the largest problems of the current internet will not disappear over time.  They just won’t exist in the first place.  The technology won’t allow those issues to be viable in the protocol.  Of course, there will be other issues that inevitably arise, as with all things, and we can tackle those when they come too.  Remember, when the first internet emerged those on the forefront thought about it idealistically as well.  We just happen to be starting the cycle all over again 30 years later.

Thanks for spending some time with me today while I fleshed out some ideas about the distributed internet.  Speaking of fleshed out, this episode was pretty bare bones, so I will be most likely following up with another on the same topic as more information arises.  If you’re interested in being on the podcast or just talking about interesting topics with me please reach out, I love to hear about all the cool and awesome things different people do with their lives.  I hope you enjoyed this today.  Next week we discuss the swift and strong break in education and how we are digging a deeper hole for ourselves every day we stand still. Be greater than average friends!

Amanda

References:

Episode 5: Data Storage Revolution



Orders of Magnitude

Transcript of Audio:

Hello and welcome to another episode of Three Deviations Out. My name is Amanda and I think for a living.  Last week we talked about millennials and the technology that shapes our lives.  This week we’re breaking down the details of the data storage revolution.

Data is not an outlier.  By now we all know that.  Data is an integral part of our lives, a looming constant that determines our decisions and grows as we create outcomes and outputs.  The outlier today is not data, but what we humans store all that data on.

What: The data storage revolution is the current and previously uncharted territory of data storage innovation, driven by the need to create more and more data centers to store that data we humans keep creating.

Who: IBM, Microsoft, SanDisk, Hewlett Packard Enterprise, Fritz Pfleumer, the University of Electro-Communications, Intel, the University of Manchester, Arizona State University, the University of Washington, Stanford University, and MIT

Why: Generally, we humans tend to be packrats.  In the digital age that hasn't changed and may be even more pronounced.  Photos of vacations you took years ago that you haven't looked at, well, in years are taking up space in that cloud drive or directly on your device.  That space isn't arbitrary, and whether you store on your device or in the cloud, there needs to be hardware to back it up.  Acres upon acres of land are owned by the US government, Amazon, Microsoft, IBM, Google, and other large data center providers and users.  The opportunity cost is potential farmland in a world where rural hunger and food deserts are a real thing.  Or potential housing when home and rental prices are skyrocketing.  Or potential natural space in a time when some kids think a baby carrot is actually what a carrot looks like.  Innovation in data storage has massive implications for the physical space we humans take up, on a large and influential scale.  Also, if there isn't innovation in the space, it means that as data piles up and we can't build any more data centers or virtualize any more machines, we will have to learn to start changing our nature, forcing ourselves to purge the unnecessary information we tend to cling to.  Cloud prices, right now next to nothing, would continue to increase until only the affluent and the corporate can afford additional space.  And tell me, how will I upload another video of my dog chasing her tail then?

That’s the thesis today folks; data storage is an outlier because it has the potential to make or break the influence wonderful emerging tech will have, because without someplace to store it all there will be no revolution.

There will be no use case section today because there is one way to use data storage: to store data.  If I’m wrong please feel free to educate me in the comments.  Instead today’s provided program will look as such:

  • The History
  • The Trigger
  • Implications
  • New technology
  • Ideal world in Amanda’s head

 

The History:

In the 1970s, when microchips began the trend of getting a makeover every 18 months, storage methods were largely left alone.  The common mindset was that there would never be the need to store so much data that it warranted significant innovation.  Enter the Internet, essentially ubiquitous access to cloud storage, and the desire of every organization and private citizen for big data analytics.  Not only is the sheer amount of data larger than it has ever been (90% of the world's data in 2016 had been created in the previous 2 years) and expected to reach 44 trillion gigabytes by 2020, there is also increased demand for edge storage.  Edge storage often lives on small IoT devices that track automated processes, and demand for analytics and storage directly on these devices will only continue to increase as blockchain across devices becomes more prevalent.  So today we focus on the data storage revolution and how hardware can keep up with the yotta- prefix.

Storage is based on magnetization: if a particle is printed on the storage device in one direction, that particle is read as a 1, and if it is printed in the other direction, it is read as a 0.  This way of writing information has stayed essentially the same for the entire history of magnetic data storage; it has just gotten smaller along the way.  The first magnetic tape was patented in 1928 by Fritz Pfleumer.  This style of storage wasn't actually used for data, though, until 1951 in the Mauchly-Eckert UNIVAC I.  Key to this type of storage is that it can be overwritten, allowing old information to be purged and new information added in its place.  Especially for consumer devices, which often don't hold data as sensitive as that of public or private organizations, the ability to rewrite allows for both cost and space savings.
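
Just to make the 1s-and-0s picture concrete, here is a toy sketch that treats each bit as a magnetic orientation (+1 or -1) and reads it back.  Real drives layer encoding schemes and error correction on top of this; none of that is shown.

```python
# Toy magnetic encoding: one orientation per bit, then read the byte back.

def write(bits):
    """Map each bit to an orientation: '1' -> +1, '0' -> -1."""
    return [1 if b == "1" else -1 for b in bits]

def read(orientations):
    return "".join("1" if o > 0 else "0" for o in orientations)

tape = write("01001000")         # the bits of ASCII 'H'
print(tape)                      # [-1, 1, -1, -1, 1, -1, -1, -1]
print(chr(int(read(tape), 2)))   # 'H'
```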

That first magnetic tape was able to hold 128 bits of data per square inch, and the first recording of data was 1,200 feet long.  A recent breakthrough by IBM has brought the density of magnetic tape to 201 gigabits per square inch.  That means one square inch of the new tape holds as much as 1.57 billion square inches of Pfleumer's tape, or 24,784 miles of inch-wide tape and over 83,000 books of storage.  Now all of that is able to fit in a space shorter than your pinky finger and thinner than the width of your phone.
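
Working that comparison through, with the two density figures taken from the paragraph above and everything else plain unit conversion:

```python
# How many square inches of 1928-era tape equal one square inch of the 2017 tape?

old_density = 128        # bits per square inch, Pfleumer's tape
new_density = 201e9      # bits per square inch, IBM's 2017 result

ratio = new_density / old_density
inches_per_mile = 12 * 5280

print(f"{ratio:,.0f} old square inches per new square inch")      # 1,570,312,500
print(f"{ratio / inches_per_mile:,.0f} miles of inch-wide tape")   # 24,784 miles
```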

The Trigger: 

That may seem like a lot of storage, but think of the amount of data generated daily.  Every day we humans create 2.5 quintillion bytes of data, which written out is a 25 followed by 17 zeros, or 2,500 petabytes.  Petabyte may be a new one to you, but it is one step (a factor of 1,000) above terabyte.  The embedded image at the top of the page is a really handy reference chart if you would like to know all current 48 orders of magnitude; among the standard prefixes, only exa-, zetta-, and yotta- sit above peta-.  Data rules over our lives these days.  What is captured by our daily activities has repercussions on the ads we see, the loans we qualify for, and even what careers we're considered for.  Data is king, and data storage is both the army of and the history written about that king.  This has proven to be a complex relationship as we continue to create more and more data.  Constantly adding servers doesn't quite work: the more servers, the more management required, and the more management required, the greater the likelihood of a crash or a bug derailing the entire system, or worse, of someone accessing sensitive data held in that storage.  Also, just adding servers creates more convoluted connections between all that hardware as the machines try to communicate with one another, processing or querying data stored across an entire server farm.

There are a number of ways this issue is being combatted right now.  One is memory-driven computing, developed by HPE and illustrated in the development of The Machine.  This allows all data being processed by any hardware in the network to live in a single shared pool, decreasing space requirements and increasing the ability to query across an entire collection of data.  Another effort is the molecular storage of data at cold temperatures, a technique still relying on magnetism but exponentially shrinking the space needed to house the same amount of data.  As the temperature at which molecules can hold data edges toward the temperature of liquid nitrogen, a fairly inexpensive coolant, the scalability and commercialization of this method become highly viable.

Implications: 

Plans for the world's largest data center have been proposed by Kolos Group, a US-Norwegian partnership that also operates in the fishery & aquaculture, oil & gas, and power & industry markets.  The large facility is planned to meld with the landscape through efficient design, and much of the expected 1,000 megawatts of power is expected to come from renewable energy.  This is just the latest and largest in a wide range of data center types, sizes, and locations.  As we continue to create data at greater and greater scales without purging what we created yesterday or the day before, our need for storage is going to continue to increase even as storage forms become denser.  That means more space being taken up by storage facilities and more power being used to run those facilities, causing a not insignificant impact on the environment.  In an age when housing prices are a struggle for even the well employed, continuing to dedicate larger amounts of land to storage only exacerbates the problem.  Data storage facilities aren't all bad, though.  Highly skilled workers are required for each of these centers, in areas spanning from admin and management to high tech data systems to cyber and physical security.  Not only that, but these fields are often underpopulated and so offer wide opportunity for those just starting their careers or looking to shift careers.  With greater environmental efficiency and planning to optimize space, along with continued advancements in the tech, we humans have the potential to live comfortably with our desire to hoard everything, even data.

New Tech:

In lieu of use cases today, we’re going to look at some of the new technology that is being researched and tested in the data storage sector.

Molecular Storage:

Blockchain and the desire for edge computing, in addition to the massive amounts of data being created daily, are spurring the need for smaller and cheaper data storage options.  For example, if a product is being tracked through the supply chain with blockchain-enabled RFIDs, the device needs to be small and cheap enough to span hundreds of thousands of items of varying sizes while also being able to hold the data for every block in the chain associated with it.  In comes our first new tech, molecular storage.  While still in the research phase, molecular storage would enable high density information to be written on a single molecule, roughly 100 times denser than current technology.  The downfall of molecular storage is the need to keep these molecules cold. Very cold.  Recently a University of Manchester team working on this technology made a breakthrough, raising the required temperature from -256 C to -213 C, a gain of 43 degrees.  However, -213 C is still tremendously cold (the equivalent of -351.4 F), and cold enough that there is not currently an effective and inexpensive cooling technology to support it.  Continued research hopes to bring molecular storage up to an operating temperature of -196 C, the temperature of liquid nitrogen.  Liquid nitrogen is relatively cheap when speaking in terms of high tech cooling systems, and reaching it would be a breakthrough in the commercialization of the technology.
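
Since the episode hops between Celsius and Fahrenheit, here is a minimal conversion sketch for the milestones above; the Kelvin column is my addition for context:

# Convert the molecular storage temperature milestones between scales.
def c_to_f(c):
    return c * 9 / 5 + 32

def c_to_k(c):
    return c + 273.15

milestones = [("previous requirement", -256), ("Manchester breakthrough", -213),
              ("liquid nitrogen", -196)]
for label, c in milestones:
    print(f"{label:>24}: {c} C = {c_to_f(c):.1f} F = {c_to_k(c):.2f} K")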

DNA Storage:

As weird as it sounds, let's start printing things onto biology.  I mean, we're already printing biology, with some 3D printers able to print organs for transplant.  So why not print directly onto the fabric of what makes us us?  That is what researchers, including teams at the University of Washington and Northwestern University's Center for Synthetic Biology, plan on doing. Like molecular storage, DNA storage is three-dimensional and therefore denser.  Unlike molecular storage, DNA storage is further along in the GTM process.  While still very much in the research and development phase, there have been some highly publicized and very interesting applications of the technology.  For example, one research team was able to print a movie onto DNA storage earlier this year: a short clip of a galloping horse was encoded into E. coli DNA.  This specific application was encoded into an actual living organism, the E. coli cell itself.  Both DNA storage and announcements by Arizona State University researchers of an RNA-based biological computer have significant implications for the furthering of the crossroads between technology and biology.
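
To make the idea of writing bits into bases concrete, here's a minimal toy encoder mapping two bits to each nucleotide.  Real DNA storage schemes add error correction and avoid long repeats, and the base mapping below is my own arbitrary choice, so treat this purely as an illustration:

# Toy DNA encoder: 2 bits per base (A=00, C=01, G=10, T=11).
BASES = "ACGT"

def bytes_to_dna(data):
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):            # walk each byte two bits at a time
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq):
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

encoded = bytes_to_dna(b"hi")
print(encoded)                                # CGGACGGC
assert dna_to_bytes(encoded) == b"hi"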

3D storage/processor combo:

Bottleneck is a real thing.  Between two chips, like a storage chip and a processing chip, the bottleneck creates data latency, and at scale that becomes a serious problem.  That sounds like gibberish you say? Try watching your Excel model struggle to process a sheet full of SUMIFS formulas on 700,000 rows of data.  Do you know Word?  I Excel at it.  I'm going off on tangents.  Anyways, a joint effort by Stanford and MIT has produced a solution: a 3D chip that is both a processor and a storage mechanism.  The device uses carbon nanotubes instead of a silicon based material, and the team has developed the most complex nanoelectronic system to date.  Layers of logic and storage are woven together to create a web of detailed and complicated connections.
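
For a feel of why shuttling data between chips hurts, here's a minimal back-of-the-envelope sketch.  The bandwidth and throughput figures are assumptions picked for illustration, not measurements of any particular hardware:

# How long is spent just moving data over the chip-to-chip link versus working on it?
dataset_gb = 64            # assumed working set
link_gb_per_s = 25         # assumed chip-to-chip bandwidth
compute_gb_per_s = 400     # assumed rate at which the processor could chew through data

transfer_s = dataset_gb / link_gb_per_s
compute_s = dataset_gb / compute_gb_per_s
print(f"transfer: {transfer_s:.2f} s, compute: {compute_s:.2f} s")
# Under these assumptions the processor spends most of its time waiting on the link.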

Advances in tape storage:

Lastly I want to again touch on tape storage.  As I mentioned earlier, IBM has recently announced a revolutionary tape that holds 330 terabytes in a cartridge about as long as my pinky (pinky size may vary).  Tape is the oldest form of data storage and has continued to evolve from the outset.  Expect to see more from this legacy tech in the future; it's not going anywhere.

Idealized World: 

We humans create a lot of data.  Especially as the generations who grew up on screens start to overtake those who didn't, data consumption and creation will continue to grow.  It has to go somewhere, and someone has to pay for it.  I see a variety of the storage models I spoke about today, along with a number of other emerging and legacy technologies, coming together to optimize our space requirements.  I also imagine that we as a society will start learning to pare down the data we keep, understanding more accurately what will bring use and what will sit in the back of the closet collecting dust.  What I know for sure is that if we continue this upward spiral into the data dimension, we will get lost in cyberspace and not realize the space around us has become exponentially more cluttered with hardware.

Thanks for listening today guys, this is everything I currently have to say about data storage.  As always, comment below with any requests, recommendations, corrections, updates, and overall bashing.  Next week we dive into the murky waters of the distributed internet, so buckle up.  Til then, go do something greater than average.

Amanda

References:

Episode 4: Millennials and Tech


Transcript of Audio:

Hello, and welcome back to Three Deviations Out.  Last week we talked about quantum computing and all the weird, crazy, fascinating, earth-shattering things that hard tech can do.  This week we're going to pull back into soft tech and the people who use it, namely millennials.  Trigger warning: I'll be talking in averages here, so this may not all apply to you.  Let's dive into what makes our generation just so special, like why we need to hear that we're so special.

88% of internet users 18 – 29 are on Facebook, compared to an average of 79%.  59% of that same age group is on Instagram against an overall average of 32%, and 36% are on Twitter compared to 24% of all online adults.  As an age group, we're even more likely to be using LinkedIn, with the demographic at 34% compared to an overall 29%.  Not only are we millennials, which currently means anyone aged 22 – 37, more likely to have a social media presence, we also spend a lot more time on it.  A dscout study found that on average mobile phone owners touch their phone 2,617 times a day, with heavy users reaching an average of 5,427 touches and 225 minutes on the phone daily.  Guess where our age group falls on that spectrum…dingdingding, right up there at the top.

We are a generation that grew up on the internet, that has always had information directly at our fingertips either at school, at the library, or in our homes.  I know, I'm speaking in generalities.  There are many people within this age group for whom that isn't true, whether because they grew up on a Himalayan mountaintop or in rural New York State where Internet penetration didn't happen until after adolescence, or because they're on the upper end of the age spectrum and don't like being lumped in with those of us who never had to use dial-up.  Now, however, technology and especially the internet are unavoidable aspects of daily life driving social change and nightly hookups and whatever else you feel like doing on your phone or laptop or tablet or desktop.  As a generation that grew up with literally the entire world of information at our fingertips we've turned out a bit different than those who came before us, and there's no shortage of people trying to point that out.  What I want to get into in today's episode is where the hype ends and actual evidence shows us being different from those who came before us, especially because of the influence all this technology has had on us.

What: The technology influencing our daily lives, including social media (Facebook, YouTube, Twitter, Pinterest, LinkedIn, etc.), crowdsourcing, forums (Reddit, Quora, StackOverflow, etc.), financial support (Mint, Betterment, Acorns, Robinhood, etc.), ecommerce (Amazon, Etsy, Alibaba etc.) and the things we see it all on – screens (phones, tablets, ereaders, laptops, desktops, etc.).

Who: Millennials and our codependent relationship with technology.  Whether this relationship is good or bad, it exists and it is influencing us in a number of ways.  We represent a group aged 22 – 37 who grew up on tech, had cell phones in high school or earlier (unless your parents were really strict), and spend an outrageous amount on student loans.

Why: Millennials are the first generation to have unlimited knowledge and connection at our fingertips anytime we want.  And yes, we as an age group are not the only ones who have access to that phenomenon.  However, we are the first to have grown up with it.  We're the first generation where interaction didn't end when you came home from school, because you could get online and chat with your friends on AIM.  Remember AIM?  This connectedness has driven movements of all sorts, with potentially the largest being the Arab Spring, a slew of revolutions against horrific treatment by oppressive regimes, driven by young people and their connection with the rest of the world through social media.

Today we will cover:

  • Why we’re different
  • How we’re different
  • Use Cases
  • Ideal world in Amanda’s head

Why We’re Different

Screen Time:

In my two-person apartment there exist 11 screens of varying sizes, 8 of which are used on a regular, almost daily, basis.  This includes laptops, phones, tablets, extra monitors, and a projector.  On the average weekday, I spend anywhere from 10 – 15 hours staring at screens.  Granted, 8 of those hours are work, where my job requires me to be at a computer, and I do spend some time outside work reading articles for this time I spend with all of you. Still though, that's a lot of time in front of a screen.  I'm not alone either.  Consider my stat from earlier: heavy phone users, and our age group skews heavy, spend 225 minutes a day on their phones.  That's nearly 4 hours a day swiping right or whatever it is you like to do with your screen.  Far beyond any generation before us, we are addicted and tied to our screens.

Access to Information: 

Because I have a computer in my pocket constantly, I am always right.  Or at least that's the theory.  Good or bad, we are the generation of constant access, both to one another and to the answer of any question we may have.  There are even now little voices you can ask just to Google things for you (e.g. Siri, are you a human?).  No longer does anyone buy sets of encyclopedias unless they want to look like a distinguished gentleman from 1910.  I don't know if I have to go on about this for very long, because I don't think anyone is arguing that we as a generation, and as a species as a whole, are able to know more now than we ever have.  However, what that access has done is ensure we actually don't retain any more, and in many cases we retain less.  Research has begun to indicate that younger generations (us millennials and the iGens that come after us) are actually retaining less knowledge because we are instead able to access it anywhere and anytime we want at the press of a button.  Think about it – how many phone numbers do you have memorized?  Can you list off the 50 states?  What did you eat for lunch last Wednesday?  If you give me a minute I can probably look through my calendar and tell you.

Overstimulation:

Television screens, video games, cell phones…this generation's everyday life is filled with constant pings, vibrations, and push notifications.  The sheer amount of light exposure is enough to drive you nuts, and it is influencing everything from our ability to interact socially to our sleep patterns and overall body chemistry.  Every time you get a ping on your phone letting you know you've been liked or commented on or re-anythinged, a little bit of dopamine is released by your brain.  Dopamine is the chemical released in the pleasure centers of our brains, the same reward pathway that narcotics hijack so effectively and addictively. And just like those who are chasing the dragon, our generation of social media junkies is trying to find a high as good as the first time someone liked the picture of your dog in a bow tie, even if it was only a bot.  We allow this to interfere with our lives, twitching a little bit each time the buzz of a phone goes off and wondering 'is that mine?'.  It also drives us to some interesting pastimes, wavering between two very heavy extremes: extreme overstimulation (think Electric Forest, Lollapalooza, Coachella) and extreme understimulation (yoga retreats, backpacking in the Andes, chaperoned 'descreen' time).  These extremes have become a reality in our lives due to the normality of constantly being ready to read the next 140 characters.

How We’re Different:

In Our Work:

Millennials are lazy, constantly need praise, don’t understand paying our dues, and are too aggressive about raises.  At least that’s what I’ve heard about us from those who are older or higher up in organizations.  Other ways of saying these are efficient, goal oriented, driven, and understanding of our value.  Many of us were asked since kindergarten what we wanted to do with our lives and lo and behold we actually thought about it.  This career determination, whether based on what we want to do in our work or what we want to get out of it, has created a revolution in the workplace.  A number of organizations have caught on to this and changed, because as a whole we are great resources for any organization.  Those which aren’t adapting are seeing their workforce age into retirement with no influx of new talent.

  • Benefits
  • Culture
  • Optimization

Benefits:

One key way we as a generation are changing the workplace is in the benefits we request.  We often look for perks beyond salary in compensation.  Of particular import are vacation time, childcare, flexible hours, and work-from-home availability.  No longer is it odd to see a company with an unlimited vacation policy.  Another trend that has caught on is the ability to bring your dog to work.  More affluent workplaces, or those trying to attract younger talent, go all out with the extra benefits, including extended parental leave, on-site fitness facilities, sports and entertainment events, pet health insurance, tuition reimbursement, cultural memberships, massages, and fully staffed free cafeterias.  And that only scratches the surface.  In 2016, 57% of job seekers, the largest share of whom are millennials, reported that benefits and perks were among their top considerations when evaluating an offer.  I don't know about you but I would love to have my fluff monster in the office with me.

Culture:

Some may argue that culture is a benefit in the workplace, but I consider them two distinct entities.  While benefits are explicitly included in a compensation package, culture is how you and the humans you work with interact on a daily basis, especially as our offices become more collaborative than ever.  One of the most significant changes that our generation is forcing onto companies is actually just that.  We are a cooperative age group, one that considers the sum of the parts to be greater than individual contribution.  As we've embraced open-source software, crowdsourcing, and social media movements we've shown again and again what great things can be done through cooperation.  Unfortunately, that often clashes with the traditional business structure that is strictly and largely hierarchical.  While many companies are attempting to increase cooperation among their employees, traditional perceptions about business roles can throw a serious wrench into the works.  If you're trying to bring more workplace cooperation into your employees' environment I suggest being open and honest with them from the start, and begin the cooperation then and there.  Ask your group or team or unit how they want to go about this change, whether people are comfortable with it, what concerns they may have.  Buy-in is always most effective when everyone has some skin in the game.  If you're an employee working through a transition to cooperation, speak up.  The only way this works for everyone is if everyone is involved.  And if you don't want to cooperate that's fine, just know you're going the way of the dinosaurs and floppy disks.

Optimization:

One big driver behind our generation's desire to cooperate is the need to feel like what we're doing matters.  Some like to deem us the generation of the participation trophy, but we aren't driven to empty goals.  What we really long for is to feel purpose in our work.  If it seems like the things we do on a daily basis aren't bringing us any closer to our goals, we aren't afraid to cut our losses and move on.  60% of millennials are open to a new job opportunity and 55% say they are unengaged at work. While we may not be the job-hopping generation that seemed to be the stereotype up until last year, our willingness to quit a job that doesn't align with our goals and values is higher than in generations past.

In Our Play:

Social:

From glamping to hot yoga retreats to Electric Forest, we millennials like to let loose a little differently than our parents did.  But as the generation of tech we've also come to embrace the idea of 'descreening'.  In general, we've accepted the fact that technology, and especially social media and work connectedness, surrounds us on a daily basis all hours of night and day.  So when we have that chance to take a vacation and get away from it all, we actually want to get away from it.  A new industry has begun to thrive that actively cuts vacationers off from technology, and we're beginning to learn to shut the screen down at dinner, at concerts, in museums, or elsewhere.

Service:

As a generation, we are more altruistic in our ways than those who came before us, even though we are less likely to belong to an organized religion.  I would argue that is because we have more opportunities to give back to our community outside of religion than those who came before us, though this also relies on the amount of wealth, free time, and exposure we have.  We carry all of these in significant abundance over our parents and especially their parents, unless you're descended from royalty or robber barons.  We aren't just all consumed by the latest video game or Pharrell album.  On average, we each give $600 per year to some charitable cause and love to use our social media accounts to discuss the relevant social justice actions in our lives.  In fact, the top trending Twitter moments last Wednesday included Charlottesville, the review of national monuments, the melting of Alaska's permafrost, and Women's Hour. Through technology we have become more connected to one another, and those of us who are empathetic are able to see the struggles of others around us even if we ourselves are not personally touched by hardship.  The ability of social media to tell the story of anyone with an internet connection has created a community that spans beyond physical boundaries and brings you directly to your tribe online.  It allows you to feel special while also not feeling alone, lending a brave confidence to a generation not afraid of speaking up.  I would argue that's a good thing.

Use cases:

Things we’ve seen so far (this week the use cases are things being done by millennials that would not have been conceived by any previous generation).

There are a number of well-known millennials who are doing things that have high influence across a spectrum of industries.  Take Emma Watson, who speaks out on behalf of the UN for equality for all, or Mark Zuckerberg who built an empire on social media and is now using that empire to attempt to cure all disease.  There are plenty of unknown faces though that are doing great things to better the reputation of us lazy millennials.  I’ll cover a few both known and less known here but there are plenty more in my resources list and out there in the world.

Alex Momont was a student at the Delft University of Technology who in 2014 had a passion for drones and wanted to find a positive use for the technology.  Partnering with Living Tomorrow and the University Hospital of Ghent, Momont developed the ambulance drone to deliver emergency supplies to a victim ahead of EMT arrival.  Today ambulance drones are in use in a number of cities and have the potential to cut down the arrival time of supplies by an average of 16 minutes.  Not only does the drone deliver materials, it also gives instructions to those around the victim on how to use the supplies available.  Cardiac arrest victims most often need to be cared for within 4 – 10 minutes, so cutting any arrival time by an average of 16 minutes has high potential to save lives.  I would count that as a win.

Taylor Swift is now a household name, and some may be rolling their eyes when they hear that name come out of my mouth.  But Taylor Swift is a millennial who understands the power of technology, particularly social media, to further her career and her message. After a long hiatus, the musician released a new song on YouTube on Aug 24, 2017.  By Aug 27, 2017 the video had 35.4 million views and 206,106 comments.  She boasts 102 million followers on Instagram and 85.4 million followers on Twitter.  In 2015 Ms. Swift posted a letter directed at Apple Music on her Tumblr account, calling the company out for not paying artists during its 30-day free trial and pulling all of her music until the company agreed to pay artists.  Most recently the artist has gone through a very public trial in which she was accused of improper involvement in the firing of a disk jockey; instead of backing down she countersued, accusing the other party of sexual assault.  The proceedings, which remained very public throughout, concluded Aug 14, 2017 in favor of Swift with an award of $1, meant to be symbolic and to support victims of sexual assault whose cases have not been made public.  While there is no doubt that Taylor Swift would have been a great artist without social media, the things she has done and the acclaim she has found would have been impossible for any artist before her attempting to speak directly to her fans.

Oscar Schwartz is a poet who is asking the world if computers can write poetry.  A writer, researcher, and teacher in Darwin, Australia, Schwartz is looking into what it means to be human and how interaction with technology changes that.  Right now, he is running a project that looks into 7 of the jobs least likely to be automated.  What he wants to know is what makes up those occupations, how they might be automated (if they can be) and if they are automated what creative bits may be lost.  He and partners have started the site Bot or Not which uses the Turing test and allows users to determine whether poetry examples are created by a human or an algorithm.  Continuously questioning what the difference between human and computer is, the researcher believes the computer is a mirror reflecting human back on human.  As technology becomes even more an integral part of our lives, this continuous reflection on what makes us human and what makes a computer a computer may be exactly what we need.

Ideal World:

We millennials know what we're doing just about as much as everyone else does.  I think the difference lies in the fact that we want to change that.  Maybe I'm young and naïve and we're all young and naïve.  There are better ways of doing things though, and with the access to technology we now have, those better ways are more possible than ever.  In my mind, a perfect world includes the cooperation of millennials and the generations before and after to use the brilliant tools we have at our fingertips to get us that much closer to zero marginal cost.  That's what we all want, right?  More leisure time without having to sacrifice standard of living.  Between the expertise of the generations and the amazing advancements in things like AI, blockchain, and quantum computing I believe that we can get close within our lifetimes.

That’s it for today everyone.  I know that was a bit different what I’ve done the last few weeks but trust me, it fits into the puzzle.  Every society needs early adopters and now there is a whole generation of us.  Next week we talk about the data storage revolution, because where else are we going to put all this information we’re creating? Til next week!

All the best,

Amanda

 

References:

Episode 3: Quantum Computing


Transcript of Audio:

Hello and welcome to episode 3 of Three Deviations Out.  If you haven’t been following along, I’m Amanda and I love outliers.  I have a passionate belief that outliers of any kind are the sparks of this world, good or bad.  Like a bit of sriracha on literally almost anything, they make life a little spicier.  Last week we talked about blockchain and how it is so much more than a digital currency for drug traffickers and people with weird fetishes.  This week we’re going to talk about quantum computing; the tech, the players, the impact.  First, let’s start with the key points:

What: Quantum computing is the idea that by using quantum bits, or qubits, instead of regular binary bits, a user can take advantage of the unique attributes quantum particles show when they become entangled and sit in superposition. It's alright if you didn't understand some or a number of the words in that last sentence; I didn't either when I first started looking into this technology.  We'll be going over the various quantum properties here in a minute.

Who: IBM, D-Wave, Google, Accenture, Atos, Rigetti Computing, NASA, China, Russia, The University of New South Wales, and every researcher who has committed their lives to studies that further quantum theory, including Albert Einstein, who noted some of the earliest quantum properties of small particles, and Erwin Schrödinger, well known for his famous cat.

Why: Quantum computers have the potential to be highly influential in operational efficiency, chemical calculation, and machine learning.  To put that in perspective, those three areas cover nearly all aspects of computing outside of productivity applications and web browsing.  Not to say that quantum computing won’t influence the Internet.  You just won’t use a quantum processor browsing for kitchen supplies on Amazon.  Everything from climate change and world hunger to cancer research and the eradication of disease could be addressed through the power of quantum processors.  That is why I see quantum computing as an outlier.  This is a technology that could exponentially advance our society in ways that only a handful of major technological breakthroughs in recorded human history have.

Okay.  So, now that everyone knows exactly what quantum computing is and how to use it and how to code while taking superposition into account, let's get into the complicated stuff.  Raise your hand if you have any questions.  None?  Great, let's move on.  I will warn you that in what follows there will be fewer use cases and less hard evidence than I was able to present with blockchain, simply because on the product development cycle quantum computing is an infant while blockchain is more of an adolescent.  Keeping that in mind I want to hit a few points:

  • How it works
  • The differences in hardware/GTM approaches
  • Potential implications
  • Use cases: Things we’ve seen so far
  • The weird stuff, because yes, it does get weirder
  • Idealized world in Amanda’s head

The first thing I’ll note is that in the great scheme of things, quantum processors are not an outlier.  Like, if we look at all of human history this is what we tend to do.  We develop tools that make our lives easier or enable us to do things faster.  The impact any of those tools has is its ability to scale and the amount of people it touches.  Last week I compared blockchain to the widespread adoption of the printing press.  Quantum is more similarly associated to the widespread adoption of agriculture.  Yeah sure, foraging for berries and hunting antelope with spears got the job done but isn’t it just so much easier to sit back feeding this cow hay until its big enough to eat?  Calculations that would have taken hundreds of years on a classical (binary) computer, even one that would be considered a supercomputer, would take days, hours, or even minutes with a quantum computer that has reached supremacy.  Supremacy is the ability of a quantum computer to outperform the highest functioning classical computer, thought to be reached at 50 qubits.  Google has pledged to have a 50-qubit processor by the end of 2017, and a research group led by Harvard professor Mikhail Lunkin announced in June they were the first to build at 51-qubit universal quantum processor.

How it Works:

Quantum entanglement is a property that occurs when the attributes of one particle give information about the attributes of another particle.  This is important because the nature of quantum particles means that observing them implicitly changes them.  Using this theory, we can say that a particle both is and is not any given thing at any given time. This idea, that a particle can exist in multiple states at once, is superposition.  So, the exchange of information about particle attributes is entanglement, and the idea that these particles exist in multiple states at once is superposition.  Both need to be achievable for a quantum computer to function, along with a number of other variables.  When these states are reached within a quantum processor, the hardware is able to do exponentially more calculations at once than a traditional computer with the equivalent number of bits, thanks to the two properties we just discussed.  Consider it this way: a traditional bit thinks in binary, either one or zero.  A qubit can consider one, or zero, or one and zero all at the same time.  These properties are influential across a number of applications including cryptography, telecommunications, teleportation of photons, and the decentralized internet, but today we will mostly stick to processors.
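
For those who like to poke at the math, here is a minimal state-vector sketch in plain numpy showing both properties at once: a Hadamard gate puts the first qubit into superposition, and a CNOT entangles it with the second.  This is a toy simulation on a classical machine, not code for a quantum computer:

import numpy as np

# Single-qubit Hadamard and the two-qubit CNOT, written as matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.array([1.0, 0.0, 0.0, 0.0])   # both qubits start as |00>
state = np.kron(H, I) @ state            # superposition on the first qubit
state = CNOT @ state                     # entangle the two qubits

print(state)   # ~[0.707 0 0 0.707]: only |00> and |11> remain, perfectly correlated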

Differences in hardware/GTM approaches:

There are a number of quantum computing companies that span services, hardware, and software.  Today I will be featuring IBM, D-Wave, and Rigetti Computing because these are the companies I am familiar with.  If you are interested in other quantum computing companies and you think I should know about them please feel free to leave a comment.  If you are part of an effort working on some aspect of quantum computing and want to talk shop, I’d love to.  Because of the depth and overall newness of this topic I expect I will cover it more than once and would love to have guests next time.  IBM, D-Wave, and Rigetti are all in the quantum processor business but each approach both hardware and GTM in different ways.

D-Wave currently has a 2,000-qubit version of its annealing processor available for the low low price of $15M, with an open source tool, Qbsolv, available on GitHub.  The company currently only offers an on-premises hardware option, with cloud access available in certain instances but not for public use.  An annealing computer is a type of quantum processor that solves a very finite set of problems.  Also, instead of fully harnessing the power of quantum mechanics it is just along for the ride.  Essentially it is the difference between using a broken-in horse to plow a field and trying to hook a wild mustang up to the plow and make that work.

IBM is working with a slightly different GTM and hardware approach.  Big Blue currently has two versions of its universal quantum computer, a 16-qubit machine available for public use and a 17-qubit machine for commercial use only.  Both are available over the cloud, and don't worry, you don't have to learn to code in quantum just yet.  The company has built an API that allows you to develop in Python directly on the quantum platform, and there is a growing community on GitHub using the curated toolkit.  However, if you did happen to be interested in learning a quantum language, IBM may not be the place for you, as their root language code has not yet been open sourced.  As far as I am aware IBM has not commercialized a hardware package, making their quantum processor available only over the cloud.
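
For a flavor of what coding against that kind of service looks like, here is a minimal circuit built with Qiskit, IBM's open source Python SDK.  Note this is the current interface, which has changed since the 2017 API described in the episode, so treat it as illustrative only:

from qiskit import QuantumCircuit

# Two-qubit Bell-state circuit: Hadamard, then CNOT, then measure both qubits.
qc = QuantumCircuit(2, 2)
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

print(qc.draw())   # ASCII drawing of the circuit; it can then be run on a simulator or IBM hardware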

Finally, Rigetti Computing is like the cooler younger sibling of the quantum computing heavy hitters.  The company started only a few years ago, going through Y Combinator, and now resides in a warehouse in Berkeley, CA.  Offering quantum computing through the cloud, Rigetti has both a Python based API and an open source quantum language called Quil for its beta program, Forest 1.0.  Users are able to build and simulate algorithms on 30 qubits, along with running them on an actual quantum chip.  The company has developed 8-qubit chips as of August 2017 and is using a new two-qubit gate scheme that makes the chip more scalable than previous iterations.  Rigetti is currently working with universal quantum chips, the most powerful type of quantum processor.

Potential implications:

I want to preface this section of the recording by saying that quantum computing is still in its infancy, especially when compared to technology like microchip processors.  So, while potential implications are well researched, they are well researched only to the extent that a technology which until the last few years existed only in peer reviewed papers can be well researched.  With that asterisk, I will say that the potential of quantum computing is large.  What we've seen so far is that the technology is able to calculate operational and chemical algorithms more effectively than classical computers.  Operational calculations span from traffic optimization to supply chain to law enforcement enablement algorithms.  Chemical calculations include determining the best ways to influence climate change, how to best grow and distribute crops to fight global hunger, and genome sequencing that could influence any number of disease prevention efforts.  This is because, according to Andrea Morello, "the number of operations it takes to get to a result is exponentially smaller". That means that any large variable algorithm, any you can think of, can be made faster and more effective with quantum computers.  The concept becomes difficult because there is little testing and real-world application of the theory, but as you will discover in Amanda's idealized world I think the potential is high.

Something quantum computers are expected to be great for, and have already shown early signs of, is breaking encryption.  The basis of most encryption is factoring problems with enough variables that classical computers could never brute force their way in.  Quantum computers, however, have very different calculation approaches thanks to superposition.  The encryption of today would be no problem for a sufficiently large quantum computer.  In fact, quantum processors have already run Shor's algorithm, a factoring algorithm designed for quantum hardware that classical computers cannot keep up with at scale.  What this means is that current encryption becomes useless once large enough quantum processors are available.  And because there are quantum cloud offerings, the barriers to entry for encryption breaking techniques do not include the steep prices of hardware.  There are companies such as Post-Quantum and ISARA Corporation that are attempting to safeguard against potential quantum attacks.  So far there has yet to be a hack specifically attributed to quantum decryption, but I only have public knowledge available to me.  If you know differently I would love to hear about it; please reach out through my contact page, comments, or the number of social accounts I have.  For now, the best approach I can recommend is to try your darnedest to go post-quantum as quickly as possible if you consider yourself a high-level target.  Otherwise, I don't see wide scale quantum hacking in the ilk of WannaCry or other massive malware happening soon.  For all our sakes let's hope I'm right.
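
To see why factoring is the linchpin here, consider the naive classical approach.  A minimal sketch (real RSA moduli are hundreds of digits long, which is exactly why trial division, and every known classical method, becomes hopeless and why Shor's algorithm matters):

def trial_division(n):
    """Return a nontrivial factor of n by brute force, or None if n is prime."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return None

print(trial_division(15))      # 3 -- the toy number quantum hardware has actually factored
print(trial_division(3233))    # 53, since 3233 = 53 * 61, a textbook RSA-style toy modulus
# The work grows with the square root of n, so a 2048-bit modulus is hopelessly out of reach.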

Another area where quantum computing has potential impact is the field of Artificial Intelligence.  Some humans may find this terrifying but personally I find it enthralling.  I'll spend an entire episode on AI, but I'll take a minute now to cover some of the things that have already been accomplished before getting into how quantum will influence the space.  Most people know Watson, Siri, and Alexa.  These are all artificial intelligence programs that the humans I know, and maybe you too, talk about as if they are people themselves.  Instead of calling Siri 'it' she is referenced as a female.  There is the sentiment about wanting to 'meet' Watson, as if you could actually shake his hand.  Beyond the programs whose names we know, there are the robo-dials that are oh so common now, artificial intelligence programs meant to speak over the phone, often as a cold call salesperson or customer support.  AI does more than just talk to us though.  There are new methods of sleep study that use AI to be more effective, along with applications in medical treatment, handwriting and facial recognition, journalism, and creativity.  I know that sounds farfetched, but just check out the piano piece in the link below built by an AI.  AI relies on deep learning and neural networks designed to 'think' like the human brain.  Quantum has the potential to exponentially increase the amount of information an AI program can process at a time, because its very nature is processing and understanding variables.  Being able to run more variables with more cross-relationships across the network will only increase the efficiency of machine learning, giving us humans more effective and efficient machines that can make decisions better than we can in almost innumerable fields.  We will see the first great novel written by an AI program, and it won't be uncommon to see an AI program on a corporate or philanthropic board.  Many knew the AI revolution was coming, and in some ways it has already snuck into our lives.  Quantum computing is just speeding up that integration, and in my opinion, that is just fantastic.

The last area I will touch on is operational efficiency, which you already know from last week is a personal favorite of mine, mostly because I'm lazy and don't want to exert any extra energy if I don't have to.  Quantum computing has already shown a few uses in operational efficiency that reach beyond potential into the real world.  Operational algorithms are perfect for quantum computing because they start out with a lot of variables and try to define every conceivable relationship between those variables to create the most optimized process.  Because a qubit can represent 1, 0, or 1 and 0 at the same time, those variable relationships can be determined much more quickly, allowing for some significant real-life applications.  One of the use cases we will be getting into revolves around this idea of operational efficiency in real time, allowing better use of a human's resources to get a job done because there are fewer or more effective steps in the process than there were before.

Use cases:

So, on that note we will get into those use cases I talked about.  We’ll be covering two today:

  • Real time traffic route optimization
  • Unsupervised machine learning

D-Wave and Volkswagen teamed up to understand how quantum computing could influence traffic route optimization, a process with a high number of variables that change at a high rate. Using a dataset of 418 taxis en route to the Beijing airport, the team built an algorithm of 1,254 logical variables to represent the problem and optimized it by running a hybrid classical/quantum solution.  Before the optimization was run, a relatively small number of routes were being taken, with heavy congestion on nearly all of them.  Correcting for queue wait times, the researchers concluded that with a dedicated D-Wave quantum annealing processor the route optimization could be run in 22 seconds across 50 randomized routes to clear congestion for the 418 vehicles on their way to the airport.  The amount of time needed to complete the problem is expected to diminish as the number of qubits in a machine increases.  The group of researchers plans to continue their work in traffic optimization with quantum, as well as understanding other real-world applications of quantum processing technology, so keep your eyes peeled and I will try to keep you as up to date as possible.
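
The core trick in that study is turning route choices into binary variables and penalizing congestion, the kind of formulation an annealer chews on.  Here is a minimal brute-force sketch for two cars and two routes with made-up penalty weights, just to show the shape of the problem; the real study used 1,254 variables and a hybrid classical/quantum solver:

from itertools import product

cars, routes = 2, 2
CONGESTION, ONE_ROUTE = 4.0, 10.0        # made-up penalty weights

def cost(assignment):
    # assignment[c][r] = 1 if car c takes route r
    total = 0.0
    for c in range(cars):                # each car must take exactly one route
        total += ONE_ROUTE * (sum(assignment[c]) - 1) ** 2
    for r in range(routes):              # penalize cars sharing the same route
        k = sum(assignment[c][r] for c in range(cars))
        total += CONGESTION * k * (k - 1) / 2
    return total

candidates = [((a, b), (c, d)) for a, b, c, d in product((0, 1), repeat=4)]
best = min(candidates, key=cost)
print(best, cost(best))   # ((0, 1), (1, 0)) 0.0 -- the cars split across the two routes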

In the first quarter of 2017, four researchers at Los Alamos National Laboratory tested the influence of quantum annealing on machine learning.  The researchers tested their hypothesis, that matrix factorizations are more efficient on quantum chips than on classical chips, on D-Wave hardware.  They ran their experiment both on the D-Wave quantum annealer and on two different classical computers.  Each processor attempted to process 10, 100, and 1,000 faces across a number of tests to develop a facial recognition program.  After all tests were run, the team concluded that while the quantum processor was capable of very fast solutions, its ability to actually reach those solutions was sporadic; many runs were very fast, but there was large variability, with some processing tests taking up to 10 minutes. The classical computers, on the other hand, were at times slower than the quantum processor but were more consistent in processing time across all tests.  The group concluded that there was really no clear winner, because while a modification to the calculation had the potential to make the classical computer quicker, they also noted the relative immaturity of quantum technology and that with work on both the algorithm used and the D-Wave hardware, speed and consistency could be improved.
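
The technique at the heart of that experiment, factoring a matrix of face images into a small set of building-block features, can be tried classically in a few lines.  Here is a minimal sketch with scikit-learn on random stand-in data; the real study used actual face images and pushed the factorization step to a quantum annealer:

import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
faces = rng.random((100, 64))             # 100 stand-in "images", 64 pixels each

model = NMF(n_components=10, init="random", random_state=0, max_iter=500)
features = model.fit_transform(faces)     # 100 x 10: how much of each part is in each face
parts = model.components_                 # 10 x 64: the learned building-block "parts"

reconstruction = features @ parts
print(np.abs(faces - reconstruction).mean())   # mean absolute reconstruction error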

There you have it, two very technical use cases of quantum computing. Most at the moment are being reported in peer-reviewed papers, so sites like arxiv.org are great for browsing if you are interested in keeping up to date on new quantum use cases.  If you know of any other use cases that have been tested, please comment about them; I would love to learn more.  In the meantime, I will try to keep you up to date on the latest articles both here and through my Twitter feed, @greaterthanxbar.

The weird stuff:

The concept of quantum computing is already kind of weird, with the idea that particles can communicate with each other and be in any number of places at the same time.  However, these qualities allow for much weirder things than just quantum computing.  Three such technologies include quantum teleportation, quantum communication, and the quantum Internet.

Quantum teleportation relies heavily on the attribute of entanglement.  Currently, experiments have been successful in teleporting photons of light across long distances.  A hang up with quantum teleportation is that quantum states are easily disrupted, and moving through the relatively dense and noisy air of the planet means a photon can only go so far before it's disturbed.  A research group in China has recently subverted this distance challenge by launching a satellite designed specifically for quantum experiments and bouncing the entangled photons off of that.  This allowed the photons to travel 870 miles, shattering the previous record of just over 60 miles.  Granted, this test is just proof of concept and you won't be making the daily commute Star Trek style anytime soon, but the relatively quick growth in the technology is exciting nonetheless.

Quantum communication, while similar to teleportation in that it relies on entanglement properties and light photons, is a bit more approachable.  In fact, it is being implemented in Jinan, Beijing, and Shanghai as China strives to be a global leader in next gen tech.  The most prominent driver of quantum communication is the security benefit that comes from quantum attributes.  Because quantum states are fragile, if a communication is intercepted the state is disturbed, and this change can be seen by any user with access, revealing the invader.  There are also technologies such as quantum key distribution that are considered quantum-safe security measures.  So, while quantum communication operates on fiber like boring ol' telecom of today, the implications it could have for secure communication in a quantum world could be far reaching.

The last weird thing we’ll talk about, besides what goes on in my brain, is quantum internet.  I’m not going to spend a ton pf time here because, like AI, in a few weeks I’ll be spending an entire episode on how this and a number of other technologies can join forces to build a new and better Internet.  You heard that right, this isn’t just something that happens on hilarious tech sitcoms.  For now I’ll keep it short and simple.  Right now, the internet uses radio waves to send information around the world.  A quantum internet would use quantum signals through a network of entangled particles.  This has implications in the speed of quantum processing over the cloud and increasing the security of sensitive data.  You won’t be using it for everything though, like listening to this podcast or checking out Trump’s latest social media blunder.  Estimates state a global network will be functional by 2030.

My Idealized World:

There are still a lot of things we don't know about how quantum computing could change our lives and disrupt problems written off as unanswerable.  There are few applications written for quantum computers and even fewer programmers who develop directly in a quantum language.  Only a few research groups have even approached the qubit counts needed for supremacy so far, though I expect more will follow soon.  To me, the things I've seen accomplished in testing and what I know about the underlying hardware and theory are enough to convince me of the potential merits.  As I see it, quantum processors will begin uncovering answers to both everyday and global problems.  Not too long from now you may have an app on your phone being fed traffic advice translated from a quantum computer.  Or you may find that the weather forecasts you see are both more accurate and more detailed.  A drug commercial may come on the television and you'll note the striking lack of side effects, because chemical compositions will be able to be synthesized more effectively.  Changes will at first be small, because often those problems will be the easiest to solve and implement.  Not long after that, though, you'll see both private companies and public institutions using the technology to unlock the secrets of everything from how best to solve world hunger to the most effective strategies of war.

That’s all for today folks.  I hope it was informative and enjoyable.  Leave any comments, questions, concerns, or corrections in the section at the bottom of the page.  Next week we talk about millennials and their relationship with technology.  Disclaimer: I’m one of them.

Have a greater than average week!

Amanda

 

References:

Episode 2: Blockchain


Transcript of Audio:

Blockchain is the use of a distributed ledger with cryptographic security settings that allow for transparent record keeping validated through consensus.  If you would like to get deeper into the technological aspects of blockchain, there is a list of references below you are free to peruse.  Blockchain in the past few years has become known as the technology driving bitcoin and the slew of other cryptocurrencies that have emerged.  The ICO, or Initial Coin Offering, has become particularly popular this year, with nearly 130 ICOs closed for the year and nearly 150 either currently in the field or on the horizon.  However, there are many applications of blockchain that are just beginning implementation, so while I will touch on cryptocurrencies today they will not be the focus of the discussion.

Topics we will cover today:

  • Why blockchain works.
  • How cryptocurrencies replace cash
  • Security implications around sensitive data
  • Use cases: Things we’ve seen so far
  • An idealized future from Amanda’s head

As always, if you think I’m missing a notable point or have questions, comment below and I will try to follow up.  But before we get into any of that let’s cover the key points:

What: Today we are talking about blockchain, a distributed ledger system that uses cryptographic algorithms to ensure a secure and transparent method for keeping records.

Who: Big players in blockchain include Hyperledger (a consortium of companies including Accenture, Cisco, Fujitsu, IBM, Intel, JP Morgan, R3, and SAP among others), the Republic of Georgia with partner Bitfury, Consensus Systems, and of course Satoshi Nakamoto and the community of miners, developers, and users on bitcoin and any altcoins.  I will touch on a few of these in my use cases later on.

Why: While cryptocurrencies have gained media attention as of late, the idea of full scale distributed consensus is a way off.  I would argue that, as the generation of Uber and AirBNB, millennials were taught to share more effectively than others, but large scale adoption of blockchain is a bit more of a leap than that, both psychologically and technologically.  Also, blockchain has the potential to make real waves across all sorts of industries in a dramatic way. Think what the printing press did for the documentation of records, or what the assembly line did for operational efficiency.  If and when it gains traction, blockchain has true change potential.  That is why I label it as an outlier.

So that’s my thesis: Blockchain is an outlier because it is an agent for global socioeconomic change.  Believe me?  If not, keep listening.  If you do, keep listening anyways.  My dad always says ‘Don’t start something you don’t plan on finishing.’ Ah parents.

Why Blockchain Works:

First, let’s talk about why blockchain works, because anything that is going to be a global agent of change has to work before it’s a global anything.  There are three reasons this outlier holds water:

It works off of systems we humans already use, namely trust.  Any transaction you make and any record you use is valid because you trust it or you trust the institution it comes from.  You trust that the credit cards used to buy fidget spinners at your specialty fidget spinner emporium are authorized by a bank, that the bank is backed by the government, and you trust the government, so sure, you will accept this swipe of a plastic card in return for this strange plastic widget.  Or when you go to the records office to receive the death certificate for great aunt Mae, you trust the time listed is correct because a doctor signed it and it was certified by the government and you trust both of them.  Or you don't, I don't know you.  The point is, blockchain takes that trust and spans it across a sweeping community, and oh yeah, adds proof of work (defined protocols we'll get into in a minute) in case you trust no one.  So, you no longer have to trust in a single person or institution, you just have to trust that, based on the defined protocols, all of these people can't be wrong.

It has built in security.  Due to the very nature of blockchain, built through proof of work, each block is both publicly transparent and privately held.  What that means is that anyone can see the records of a particular block, but that block is still owned by one person and can only be changed or transacted by that person.  Proof of work comes from the need to 'mine' blocks, solving cryptographic puzzles through brute force processing before a block is accepted by the network.  Ownership is then handled through a public key and a private key.  The two keys belong to an individual user and, used together, decrypt and allow access to that user's records.  A public key can be used alone by any user in the system to verify the validity of a record.  A private key, if kept private by the owner, keeps any stored data within the block the property of the owner.  This means that all blocks are secured through asymmetric encryption and are much more secure than systems without it.
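
To make 'solving cryptographic puzzles through brute force' concrete, here is a minimal toy proof-of-work loop; the difficulty, block contents, and single-hash scheme are simplified stand-ins, not Bitcoin's actual parameters:

import hashlib

def mine(block_data, difficulty=4):
    """Find a nonce so that the hash of (data + nonce) starts with `difficulty` zeros."""
    nonce = 0
    target_prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 5 coins")
print(nonce, digest)   # thousands of guesses later, a hash with the required prefix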

It creates industry agnostic operational efficiencies.  Due to the inherent trust found in blockchain protocols, there can be a lot of disposing of the middleman, so to speak.  If you've ever gone through real estate and/or divorce proceedings (so many similarities), frustrated as your funds sit in escrow while the lawyers figure out what's what, you could be helped by blockchain.  Or if you're sick of waiting in line at the county clerk's office because your dog ate your social security card and you need a new one.  Lengthy, costly, and frankly annoying administrative tasks such as these are sidestepped through blockchain use.  So are things such as verification of goods and of the delivery process.  Imagine never having to wait for that fidget spinner you ordered ever again because someone lost something somewhere but no one knows who or what or where and now there's infighting and I just want my fidget spinner oh god!  Exactly.  So as the world and our demands on it speed up, let's try to let fewer things fall through the cracks and maybe save a little (a lot) of money in the meantime.

How Cryptocurrencies Replace Cash:

Okay.  So now that you know why it works, let's talk about cash.  Cryptocurrencies have been the most public use of blockchain, but really what bitcoin and the altcoins are is the infrastructure behind all the great things that can be done with blockchain.  Remember that trust we were talking about earlier?  Well that applies to money too.  It's been decades since most currencies were based on any type of physical wealth.  A US dollar is worth a US dollar because the US government says so.  That goes for any currency across the globe.  The market dictates just how much that dollar is worth, for example whether it can buy one egg or twelve, but it is a US dollar because it was issued by the US Treasury and is backed by the full faith and credit of the US government.  The only difference with a cryptocurrency is that instead of a singular backer there is distributed consensus over the validity of a block.  Not only does that create a more transparent and less costly system, blocks are much much much more difficult to counterfeit than, say, a US dollar bill.  A number of countries including Singapore and Australia are looking into the creation of their own cryptocurrencies, but what I see as a more likely outcome is coins used in markets based on the adaptability of their protocol within that marketplace, not the physical borders they belong to.  For example, bitcoin may not be a great fit for the advertising space due to latency issues but has great potential in areas like legal proceedings.  However, an altcoin may come to light that has significantly lower latency and is a natural fit for advertisement buys.  Whichever way it falls out, move over cash, you're losing your job to automation.

Security Implications:

We already talked about security implications when we discussed why blockchain is an outlier, so I won’t spend a ton of time on it.  I do want to reiterate that the very design of the technology ‘makes it virtually impossible to add, remove, or change data without being detected by other users’ (Goldman Sachs).  I also don’t want to get into protocol structures, but there are ways, as the owner of a block, to shield certain data that may be sensitive or proprietary from other users.  For an in-depth and technical discussion of this and all things blockchain, check out the Coursera course listed below.  Do note that the recordings in that course are a bit out of date, as they were made before bitcoin’s hard fork.
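
To illustrate the Goldman Sachs point about tampering being detectable, here is a toy Python hash chain: each record carries the hash of the one before it, so changing any earlier record breaks every hash that follows and is immediately visible to anyone re-checking the chain.  The ledger entries are invented for illustration.

```python
import hashlib

def chain(records: list[str]) -> list[str]:
    """Link each record to the previous one via SHA-256, like blocks in a chain."""
    hashes, prev = [], ""
    for record in records:
        prev = hashlib.sha256((prev + record).encode()).hexdigest()
        hashes.append(prev)
    return hashes

ledger = ["Alice -> Bob: 5", "Bob -> Carol: 2", "Carol -> Dave: 1"]
original = chain(ledger)

# An attacker quietly edits an old record...
tampered = ledger.copy()
tampered[0] = "Alice -> Bob: 500"

# ...and every hash from that point on no longer matches what other users hold.
mismatches = [i for i, (a, b) in enumerate(zip(original, chain(tampered))) if a != b]
print(f"tampering detected at blocks {mismatches}")  # [0, 1, 2]
```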

Use Cases:

Before we get into the idealized world in my head full of puppies and seamless operational capabilities, I want to touch on a few use cases.

  • The Republic of Georgia and their partnership with The Bitfury Group
  • P2P energy transfers in your driveway
  • The ridding of education admin fees
  1. In April of 2016 the Republic of Georgia announced a partnership with The Bitfury Group to digitize land ownership records. In February of 2017 the government determined the proof of concept had been successful and signed agreements with Bitfury to expand the service to most land-related contracts negotiated or recorded through the government.  The only change to the citizens’ process is the new ability to check who owns any given land title, dispelling confusion around double selling and vacant property.
  2. This month (August 2017) the company eMotorWerks launched a blockchain beta test driven, literally, by the daily need for power. A hindrance to the adoption of electric vehicles is the short distance they can travel before needing to be charged again.  Particularly in areas with few public chargers and long distances between them, this can be a significant deterrent when choosing between electric and gas.  Val Miftakhov, CEO of eMotorWerks, may have found a solution in EV-dense areas.  Interested users can sign up to offer the personal electric vehicle chargers in their driveways to nearby drivers looking to charge while they shop or grab lunch.  Transactions are made through the Ethereum blockchain, whose currency holds the second-largest market cap after bitcoin. While the Share & Charge platform is in beta, and the specific need will diminish as EV battery range increases, the test will show how we react to making secure P2P blockchain transactions part of daily life.
  3. Sony teamed up with IBM, building on Hyperledger Fabric 1.0, to develop a blockchain for education. Specifically, the software stores and manages educational records such as diplomas, degrees, and tests to create a ‘digital transcript’.  This can ease communication between schools when a student transfers, prevent fraud, and ensure security while still giving pertinent access to interviewers and the higher-education institutions a student may be applying to.  The system can be integrated to absorb historical data from other providers and has the potential in the short term to cut down on administrative costs.  Long term I expect to see widespread adoption of this or similar applications in the education space, allowing a student to maintain an educational fingerprint that follows them throughout their academic career.  That could not only help with the transfer of students from grade to grade or institution to institution but also pinpoint through analytics when a student may be falling off track and how best to help them.  (A toy sketch of the verification idea follows this list.)
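
Here is a minimal sketch of the verification idea behind a ‘digital transcript’: the institution records only a fingerprint (hash) of the credential on the shared ledger, and an interviewer or admissions office later checks a presented document against it.  The ledger here is just a Python set and the diploma text is invented; a real system like the Sony/IBM one layers identity, access control, and Hyperledger Fabric’s permissioning on top.

```python
import hashlib

# Stand-in for the shared ledger: in reality this would be a permissioned blockchain.
ledger_of_credential_hashes = set()

def issue(credential: str) -> None:
    """School records a fingerprint of the credential, not the document itself."""
    ledger_of_credential_hashes.add(hashlib.sha256(credential.encode()).hexdigest())

def verify(presented: str) -> bool:
    """Anyone can check a presented transcript against the recorded fingerprint."""
    return hashlib.sha256(presented.encode()).hexdigest() in ledger_of_credential_hashes

issue("Jane Doe, B.S. Computer Science, 2017, GPA 3.8")
print(verify("Jane Doe, B.S. Computer Science, 2017, GPA 3.8"))  # True  -- genuine
print(verify("Jane Doe, B.S. Computer Science, 2017, GPA 4.0"))  # False -- doctored
```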

As I mentioned before, cryptocurrencies are the infrastructure that allows any number of applications to be built on blockchain.  But quid pro quo transactions will not be the only way the technology is used, as the education example shows.  Uses will vary vastly across industries and types, and will shake the very bedrock of our socioeconomic system much as the invention of the microchip did: slowly at first, but with exponentially growing magnitude.

My Idealized World:

With that, we’re coming to a close.  I want to leave you with a few personal opinions about the future of blockchain and what it means for a person’s everyday life.  Long term, blockchain could change the entire fabric of our society.  It will be gradual, if it even happens, but because of its key differentiator, decentralization, blockchain has the potential to turn everything on its head, from how you buy groceries to how land masses are run.  A decentralized, consensus-driven state is not built in the physical world, though it may extend there.  Economics will instead be built on the coin you use and on the industry- and value-driven communities you exist in.  Blockchain has the potential to mean borderless societies, the end of financial institutions, and an acceleration toward zero marginal cost.  This technology can support great change.  But as humans we are the ones who determine what society looks like, so our decisions will ultimately be the drivers of that change.

Thanks for listening today; this has been episode 2 of Three Deviations Out.  I hope you enjoyed it, and please leave any questions or comments below.  Next week we will be talking about quantum computing, communication, and teleportation, and how what Albert Einstein described as ‘spooky action at a distance’ may enhance scientific fields and daily interactions.

All the Best,

Amanda

 

References:

 

Episode 1: The Manifesto



Transcript of Audio:

Hello everyone.  My name is Amanda.  This is Three Deviations Out.  The term comes from statistics, where it is used to identify outliers, which are generally discarded to create a more normalized sample set.  Normalized sample sets drive good economic decision making.  Here I plan to explore those outliers, what makes them so fascinating, and what their impact on the world is.  Because while the majority drives global trends, the outliers cause revolutions.

On Three Deviations Out you will mostly find technology, because like it or not it is ingrained in nearly everything we humans do these days.  The focus will sometimes be on the tech and sometimes on how the tech is allowing for socioeconomic change.  However, as the proprietor of this establishment I maintain full rights to talk about absolutely anything I want.  My mom says I’m fairly cool and interesting, so hopefully I don’t disappoint.  If I do, feel free to let me know in the comments below.  I am also open to suggestions for topics you see as outliers, and may or may not end up exploring them here.  I hope my lack of commitment doesn’t drive you away.

Most of these posts are going to be podcast style with transcripts, as this one is, but I will also be posting infographics, data graphs, and videos of my dog explaining quantum mechanics.

On a last note, my opinions are my own and should not be taken as financial, technological, or any other kind of advice.  I am a specialist of nothing.

Next week we will be talking about blockchain technology: how it came about, what the tech behind it is, and how it can impact our institutions.

All the Best,

Amanda