The New Careers of 2030

This thought piece was written in collaboration with my peers Ryan Morgan, Ivy Nguyen, and Tiffine Wang. An abridged version is available on ReadWrite.


With AIs answering our emails and robots increasingly replacing us on manufacturing assembly lines, mass unemployment due to widespread automation seems imminent. It is easy to forget amid our growing unease that these systems are not all-knowing and all-competent. As many of us have observed in our interactions with AIs, these systems perform repetitive, narrowly defined tasks very well but are quickly stymied when asked to go off script, often to great comic effect. As technological advances eliminate historic roles, previously unimaginable types of jobs will arise in the new economic reality. We combine these two ideas to map out some new jobs that might arise in the highly automated economy of 2030.

Training, Supervising, and Assisting Robots

As the tasks robots and AIs take on become increasingly complex, more humans will be needed to teach these systems how to correctly accomplish their jobs. Human Intelligence Task (HIT) marketplaces such as MTurk and CrowdFlower already use humans to train AIs to recognize objects in images or videos. New AI companies are expanding the HIT model with specialized workers who train AIs for complex tasks. Lola is one such company, using professional travel agents to train its AI.

Microsoft’s Tay bot, which quickly devolved into tweeting offensive and obscene comments after interacting with users on the internet, caused significant embarrassment to its creators but was ultimately harmless. Given how quickly Tay went off the rails, however, it is easy to imagine how dangerous a bot entrusted with maintaining our physical safety could become if it were fed the wrong sets of information or learned the wrong things from a poorly designed training set. Because the real world is ever-changing, AIs must be continuously trained even after they achieve workable domain expertise, making expert human supervision critical to ensure that the AI remains correctly tuned for its intended function instead of evolving incorrect models that impede its performance.

Integrating humans into the design of a semi-autonomous system has enabled some companies to achieve greater performance despite current technological limitations. BestMile, which deploys driverless vehicles to transport luggage at airports, is one company that has successfully integrated human supervision into its design. Instead of engineering for every edge case in the complex and dangerous environment of an airport tarmac, a BestMile vehicle stops when it senses an obstacle in its path and waits for its human controller to decide what to do. This has enabled the company to enter the market much more quickly than competitors who must continue to refine their sensing algorithms before their robots can operate independently without incident.
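A minimal sketch of this stop-and-ask control pattern can make the idea concrete. Everything below is hypothetical illustration; BestMile's actual software and interfaces are not public:

```python
def drive(route, obstacle_ahead, ask_operator):
    """Follow a route; on any sensed obstacle, stop and defer to a human.

    route          -- sequence of waypoints to visit
    obstacle_ahead -- sensor callback: True if something blocks this waypoint
    ask_operator   -- human-in-the-loop callback returning a decision string
    """
    log = []
    for waypoint in route:
        if obstacle_ahead(waypoint):
            # Instead of solving every edge case in software,
            # halt and let a remote operator decide what to do.
            decision = ask_operator(waypoint)
            log.append((waypoint, decision))
            if decision == "abort":
                break
        else:
            log.append((waypoint, "auto"))
    return log
```

The design choice is the interesting part: the autonomy only has to be good enough to detect "something is in the way," not to resolve it, which dramatically shrinks the engineering problem.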

Frontier Explorers: Outward and Upward

Throughout history, people have emigrated between countries seeking better work opportunities. Examples include the German emigration to the United States in the 1800s and the ongoing emigration out of southern Europe in search of better jobs. Since automation’s impact will be felt worldwide, the next big wave of emigration could be upward instead of merely outward, leading to increased space exploration and settlement.

We already see that humans are ready and willing to emigrate to the final frontier in droves. When Mars One, a Dutch startup whose goal is to send people to Mars, called for four volunteers to man its first Mars mission, more than 200,000 people applied. Furthermore, regardless of whether automation leads to increased poverty, its threat of stripping people of their current jobs, and with them part of their sense of self-worth, could drive many to turn to exploring our final frontiers. An old adage jokes that there are more astronauts from Ohio than any other state not because of Ohio’s great educational system but because there is something about the state that makes people want to leave this planet.

Automation also has the potential to increase the “pull” that might lead to expanded space exploration. By removing or reducing the cost of human labor, automation will lead to increased demand for certain price-elastic goods. At some point, Earth will begin running out of certain materials, many of which are abundant in space. Asteroid mining, while obscure, is not a new concept and is already being pursued by startups including Planetary Resources and Deep Space Industries, even before automation increases demand for materials. If the value of resources in space climbs sufficiently high, we may witness a new “gold rush” in space, causing further upward human migration.

One risk to human involvement in exploration is that exploration itself is already being automated. Relatively few of our space exploration missions have been manned. Humans have never traveled beyond the Moon; all our exploration of other planets and the outer solar system has been through unmanned probes. Even on Earth, companies are finding it easier to explore remote areas using semi-autonomous robots. Companies such as Liquid Robotics, which was acquired by Boeing in December 2016, are exploring the sea through unmanned ocean gliders.

Artificial Personality Designers

As AIs creep into our world, we’ll start building more intimate relationships with them, and the technologies will need to get to know us better. Just as an effective sales associate or waiter knows that different clients prefer different interactions, a single AI personality will not suit every user. Moreover, different brands may want to be represented by distinct and well-defined personalities. The effective human-facing AI designer will therefore need to be mindful of the subtle differences that make those interactions enjoyable and productive. This is where the Personality Designer or Personality Scientist comes in.

Siri can tell a joke or two, but humans crave more, and naturally we will have to train our “things” to provide for our emotional needs. To create a stellar user experience, Personality Designers or Scientists are needed to research and build meaningful frameworks with which to design AI personalities. These people will be responsible for studying and preserving brand and culture, then injecting that information meaningfully into the things we love such as our cars, media, electronics, fashion, and more—anything that AI might touch.

A more immediate and blunt solution that chatbot builders are using is hiring playwrights and poets to write lines of dialogue and outright scripts to inject personality into their bots. Cortana, Microsoft’s virtual assistant, employs an editorial team of 22. Creative agencies specializing in writing these scripts have also emerged in the last year.

Startups such as Affectiva and Beyond Verbal are building technologies that assist with recognizing and analyzing emotions, enabling AIs to react and adjust their interactions with us to make the experience more enjoyable or efficient. A team from MIT and Boston University is teaching robots to read human brain signals to determine when they have made an error, without requiring active human correction and monitoring. Google has also recently filed patents for robot personalities and has designed a system to store and distribute personalities to robots.

Human as a Service

As automated systems become better at doing most jobs humans perform today, the jobs that remain monopolized by humans will be defined by one important characteristic: the fact that a human is doing them. Of these jobs, social interaction is one area where humans may continue to desire specifically the intangible, instinctive difference that only interactions and friendships with other real humans provide.

We are already seeing profound shifts toward “human-centric” jobs in markets that have experienced significant automation. A recent Deloitte analysis of the British workforce over the last two decades found massive growth in “caring” jobs: the number of nursing assistants increased by 909% and care workers by 168%. To extrapolate further, providing cuddling and other intimate but non-sexual forms of human contact may become a common job in the future. The positive health effects of touch are well documented and may provide valuable psychological boosts to users, patients, or clients. In San Francisco, companies today are offering professional cuddling services. Whereas today such services are stigmatized, “affection as a service” may one day be viewed on par with cognitive behavioral therapy or other treatments for mental health.

Likewise, friendship is a role that automated systems will not be able to fully fill. Certain activities that are generally combined with some level of social interaction (such as eating a meal) are already seeing a trend towards “paid friends.” For example, thousands of Internet viewers are already paying to watch mukbang, live video streams of people eating meals, a trend that originated in Korea, to ease the loneliness of living alone. In the future, it is possible to imagine people whose entire job is to eat a meal and engage in polite conversation with clients.

More practical social jobs in an automated economy may include professional networkers. Just as people have not fully trusted online services, it is likely that they will not trust more advanced matching algorithms and may instead defer to professional human networkers who can properly arrange introductions to the right people to help them reach their goals. Despite the proliferation of startup investing platforms, for example, we continue to see startups and VC firms engage placement agents in order to fundraise successfully.

Looking forward

These jobs might seem outlandish today, but many high-demand jobs such as app developer, social media manager, and data analyst did not exist a mere ten years ago. Despite what many startups claim, designing a fully autonomous system is incredibly complex and remains far out of reach. Training a human to help a robot with unexpected tasks or obstacles, or to fill the roles that require that intangible human touch, will continue to be much cheaper than designing yet another robot for the job. While it is important to acknowledge the growing global upheaval caused by increasing automation, it is equally vital to look ahead to what more we can do with the time, resources, and therefore possibilities that automation unlocks.



What Happens After Moore’s Law?

For the past decade, technologists and reporters have been claiming that the end of Moore’s law is just around the corner. In many cases, they cite a slowing in clock speed improvements, reliance on multi-core architectures for performance gains, and most recently Intel’s discontinuation of its “tick-tock” processor iteration cycle. This post will not discuss when Moore’s law will end, but rather what the implications for the electronics industry will be when it inevitably does.

The semiconductor industry is strange from an outside view. Semiconductor companies spend billions of dollars every few years to build new fabs (semiconductor factories) to achieve only marginal improvements over their competitors. Then, a few years later, they write off the entire factory as obsolete. As Moore’s law ends, these fabs will no longer need to be constantly replaced, only maintained, meaning that consumers will no longer have to bear the cost of building new fabs. Chip prices will decrease substantially across the board, and chips will no longer be differentiated by process node but by the size of the die. Chips on larger dies will be more powerful and will likely be priced in proportion to the amount of silicon they require to manufacture. As the end of Moore’s law approaches, manufacturers will likely begin to shift research from improving process node technology to improving yield and die size.

The end of Moore’s law does not mean the end of performance improvements—rather, it may lead to an explosion in higher-efficiency chips. Under Moore’s law, semiconductor startups had extreme difficulty competing with incumbents, simply because they did not have the resources to keep up with the latest process node technology; by the time a startup made a chip available to customers, the chip giants were already two process node generations ahead of it, wiping out any efficiency improvements the startup may have had. As Moore’s law ends, these startups will be able to experiment with new architectures and chip designs without fear of immediate obsolescence. A startup with a substantially better CPU design may even be able to disrupt the dominance of x86 in modern computing if it gains enough developer support.

However, I believe that the end of Moore’s law will be most notably marked by a huge increase in the variety of application-specific integrated circuits (ASICs). The CPU today is a jack of all trades but a master of none; as CPU performance flattens out with the end of Moore’s law, designers may delegate specific CPU functions to ASICs to improve total system performance. This is already true of graphics cards, which run the graphical and physics elements of games on specially designed chips. CPUs have also begun to incorporate ASICs: Intel chips have special circuitry designated specifically for encoding and decoding H.264 video and for compressing and decompressing documents; Snapdragon SoCs have special circuits for vision processing and ambient keyword listening. As Moore’s law comes to an end, I expect CPUs to increasingly use ASICs in their designs for basic functions, but I also expect modular ASICs to become common in PC and server design. Just as PC gamers added GPUs to their systems to increase game performance, Google added TPUs to its servers to accelerate deep learning workloads, and has now even designed specialized dedicated deep learning servers for training its networks. I anticipate that processor differentiation will continue to increase (in part stoked by a growing number of semiconductor startups), and that PCs will no longer be as homogeneous as they are today. I expect that in the future, laptops designed for artists, gamers, programmers, writers, and businesspeople will have greatly different architectures and designs.

An end to Moore’s law may also lead to longer product lifecycles, potentially changing major aspects of product design. For example, today most smartphones are replaced every 2-3 years. Because of this, many aspects of the device, including the battery, are not designed to last beyond 1,000 charge cycles. If the processing power of devices does not make them quickly obsolete, then designers may need to build more rugged and modular electronics, so that they last many years, perhaps even decades. For example, despite a shift towards integrated rechargeable batteries in smartphones and laptops in recent years, in a post-Moore’s law era users may demand the ability to replace their batteries as they age. As the weakest link in a device’s lifetime shifts, many components will need to be re-engineered, especially those that include moving parts. Further, aesthetic changes may be needed: users may expect their devices to look more appealing if they plan on sticking with them for decades.

The pressure on hardware innovation should not be the biggest concern to consumers in a post-Moore’s law age; rather, consumers should be concerned about Moore’s law’s software corollary: Wirth’s law (sometimes known as Gates’ law, May’s law, or Page’s law), which states that the speed of software halves every 18 months. This trend is caused by a combination of feature bloat and developer laziness, which is why many tasks (such as browsing basic websites and word processing) are just as slow as, and sometimes slower than, they were decades ago. If this trend continues, it is quite possible that our electronics will effectively die from bad software, and new electronics will fare no better at replacing them. My only hope is that programmers of the future somehow change their habits and focus on code efficiency rather than the speed at which they can produce code.

Unexpectedly Easy and Surprisingly Difficult Use Cases for Automation

This thought piece was written in collaboration with my peers Ryan Morgan, Ivy Nguyen, and Tiffine Wang. An abridged version is available on VentureBeat.

Many have dreamed of robots taking over the monotonous functions of our daily lives, freeing us humans to work on highly skilled or creative tasks. Today, robots are increasingly making their way into our homes, offices, and public spaces, but this integration is not occurring in the way that most futurists and popular science fiction writers anticipated. Here are our top picks for tasks that have proven surprisingly easy to automate, and tasks that remain surprisingly hard.


Tasks Surprisingly Easy to Automate

Art and Music

Prevailing wisdom claimed that computer systems would be incapable of creating art or music because they lack the natural creativity of humans. However, automated systems have been surprisingly successful at generating works of art. Since the early 2000s, the University of London’s The Painting Fool program has been generating artwork, much of which has been featured in prominent galleries alongside human-created art. Google’s DeepDream software uses a recursive convolutional neural network to iteratively adjust images to morph them into objects in its database (usually dogs, buildings, and eyeballs). Other neural networks such as DeepStyle or Prisma use convolutional neural networks to stylize photos after the work of a specific artist. Google is now experimenting with automated curation of art exhibits through machine understanding of the style and topic of artists’ work. Logo generation systems like Withoomph, Tailor Brands, and Logojoy use partially- or fully-automated systems to generate logos for a business based on keywords.
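The core mechanic behind DeepDream, iteratively adjusting the input image so that it more strongly excites some internal feature detector, can be sketched in a few lines of numpy. The toy linear "feature" below stands in for a real convolutional layer; this is an illustration of gradient ascent on the input, not Google's actual code:

```python
import numpy as np

def dream_step(image, feature_vector, step=0.1):
    """Nudge the image toward exciting a linear 'feature detector'.

    For activation a = feature_vector . image, the gradient of a with
    respect to the image is just feature_vector, so gradient ascent on
    the input is a small step in that direction. DeepDream applies the
    same idea using a deep network layer's activations instead.
    """
    return image + step * feature_vector

def dream(image, feature_vector, steps=50):
    for _ in range(steps):
        image = dream_step(image, feature_vector)
    return image

# Toy example: a flat gray "image" drifts toward the feature's pattern.
img = np.full(4, 0.5)
feat = np.array([1.0, -1.0, 1.0, -1.0])
out = dream(img, feat)
```

After the loop, the feature's activation on `out` is far higher than on the original `img`, which is exactly why DeepDream images fill up with the shapes the network already knows.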

Researchers have also demonstrated systems that can generate music: Stephen Wolfram has made an automated music generator available to the public since 2005, the University of Malaga has created Melomics, a system that autonomously composes and plays music to match your lifestyle and activities, and IBM has partnered with artists to help compose music with Watson by combining massive musical datasets and their lyrics with sentiment analysis. Although these automated art and music systems are arguably derivative of the work of others, they are undeniably interesting and beautiful in their own right. Many scholars may argue that, like artificial art, all human art is derivative, although that is a discussion best left to philosophers. It appears that natural human creativity is not, in fact, necessary to create beauty.

Scientific Research

Historically, scientific research has been conducted by highly educated researchers and lab assistants, and as such was not an obvious place for a blossoming of robotics applications.  However, a core basis of science is reproducibility, and many aspects of science and research are extremely repetitive, requiring neither continuous thought nor the high level of education attained by the individuals who perform those tasks today.  

Companies such as OpenTrons are working to save scientists both time and money by helping them automate pipetting, a precise but monotonous and labor-intensive task used in many biological and chemical laboratories. Startups Arcturus and BioRealize enable scientists to remotely run many genetic engineering experiments in parallel, greatly reducing mistakes and lab time. Other startups, such as Emerald Therapeutics and Transcriptic, are hoping to move research to the cloud, using remote robotic systems to perform the experiments themselves. In addition to vastly increasing lab efficiency, these systems could greatly improve reproducibility, as researchers need only share their experiment’s source code.

Not content with automating manual lab work, machines are now automating scientific discovery and understanding. Scientists at Cornell’s AI lab created a program called Eureqa, which uses an evolutionary approach to model creation and can build models of data without being given any prior assumptions. The team has since commercialized the technology through a spinout, Nutonian. Researchers at Cambridge and Aberystwyth have created a similar autonomous science algorithm called Adam, which they claim was the first machine to independently discover new scientific knowledge. That same team moved on to the University of Manchester, where they created Eve, a robot designed to make the drug discovery process faster and cheaper. As the pace of scientific research increases, scientists may rely more and more on automated systems.
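Eureqa's internals are proprietary, but the spirit of discovering a model without prior assumptions can be illustrated with a toy random search over candidate formulas. Everything below (the candidate set, the fitness function, the search loop) is a hypothetical sketch, not Eureqa's actual evolutionary algorithm:

```python
import random

# Candidate model forms: the search makes no prior assumption
# about which functional form fits the data.
CANDIDATES = {
    "linear":    lambda x, a: a * x,
    "quadratic": lambda x, a: a * x * x,
    "constant":  lambda x, a: a,
}

def fitness(model, a, data):
    """Sum of squared errors of model(x, a) against observed y."""
    return sum((model(x, a) - y) ** 2 for x, y in data)

def search_model(data, trials=2000, seed=0):
    """Randomly sample (form, parameter) pairs, keeping the best fit."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(trials):
        name = rng.choice(list(CANDIDATES))
        a = rng.uniform(-5, 5)
        err = fitness(CANDIDATES[name], a, data)
        if err < best_err:
            best, best_err = (name, a), err
    return best, best_err

# Data secretly generated by y = 3x^2; the search should recover the form.
data = [(x, 3 * x * x) for x in range(-5, 6)]
(best_form, best_a), err = search_model(data)
```

Even this crude search "discovers" that the data is quadratic with a coefficient near 3; systems like Eureqa replace the random sampling with genetic operators over full expression trees.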

Law

The practice of law requires many years of studying and understanding laws, cases, and other legal precedent. Law firms often employ large teams of paralegals and interns to perform the majority of initial research; current AI technologies can augment and eventually automate the bulk of that research, leaving the more difficult aspects of practicing law, such as synthesizing the information and advising clients, to trained legal professionals.

To date, case law, contract law, and advocacy law have seen the most automation. Startup DoNotPay helps users appeal traffic tickets; the bot started in the UK and has expanded to cover several US cities. Over a quarter million tickets have been successfully challenged, and several cities have banned the bot. ROSS Intelligence augments legal research by using AI to surface relevant legal passages and cases to increase the efficiency and quality of legal research. eBrevia uses AI to extract data from contracts to accelerate diligence, contract analysis, and other related applications. Legalist automates the qualification process for litigation financing and backs cases they identify as having a high success rate.

There is a ceiling, imposed by liability concerns, on how much of law can be automated; a startup might expose itself to liability if it advised a client on legal matters, so most companies can likely only research and synthesize information. Because much of the legal industry charges by the hour, it will be interesting to see to what degree lawyers will accept automation and other tools that increase the efficiency of their practice.

Policing and Security

Unlike warfighting robots, which have clearance to hurt or even kill humans, robots that work as a police force or physical premises security, particularly in areas far from any war zone, are expected to more closely follow Asimov’s three laws of robotics: avoiding injury to humans while still keeping an area secure. For this reason, many would expect criminals to ignore robotic security guards, knowing full well that the robots could not harm them.

However, the primary function of human physical security teams is to observe and report security incidents; due to the risk of liability to their employers, many human security guards are not allowed to interfere, making their work surprisingly easy to automate. Security automation to date has focused on augmenting existing security forces, inexpensively amplifying the eyes and ears of security guards without increasing their numbers. Traditionally, fixed camera systems could perform this task, but constantly monitoring a large campus by camera is difficult. Knightscope and Gamma 2 Robotics build ground-based security robots that act as “force multipliers” to provide deterrence through physical presence; these robots are being deployed at an increasing number of corporate campuses and shopping malls. Another startup, Nightingale Security, is using flying drone robots to help their customers carry out continuous surveillance. In time, robots may replace substantial portions of police and security forces, requiring humans only for resolving violent incidents.


Tasks Shockingly Hard to Automate

Cleaning

Cleaning is perhaps one of the most obvious and logical applications for robots, yet with the exception of the occasional household Roomba, robotic cleaning is not widespread. Not only is cleaning an extremely repetitive task that requires very little technical skill, but it is also a task that almost everyone has to do in their personal lives and despises. What makes the dearth of cleaning robots more perplexing is that the Roomba, released in 2002, was one of the first robots commercially available to everyday consumers. Almost fifteen years later, no real innovation in cleaning robots has seen commercial success. While the Roomba, made by iRobot, is the best known of these robots, competing products are made by a healthy list of competitors, including Dyson and LG.

On the commercial side of the market, work on an automated janitor has gone back at least as far as the 1980s, when Electrolux’s subsidiary, The Kent Company, was working on a solution. Yet somehow, to this day, cleaning on a commercial scale is still done by hand. Several employees of Kent went on to create a company called Intellibot that builds a large-scale automated floor cleaner called TASKI. Another large-scale floor cleaner, the RS26, comes from a collaboration between Brain Corp and International Cleaning Equipment.

With that said, there are some very interesting applications on the horizon outside of floor cleaning robots. The startup Ecoppia is using robots to clean solar panels, and Kurion has a division that makes robots that are used to clean up nuclear facilities, including Fukushima.

Clothing and Textiles

It is surprising that textile manufacturing, one of the first industries to flourish through automation in the industrial revolution, remains incredibly hard to automate completely. Robots work best when manipulating solid objects. Textiles, however, shear, stretch, and compress, making them difficult for robots to manipulate, even though such manipulation comes naturally to humans. Machines today can dye and cut fabrics into the right shape for assembly and embroider intricate designs in an instant, but nimble human fingers are still required for assembling that fabric, sewing on buttons and pockets, and adding delicate finishing touches such as lace and pleats.

Even today, few companies have made any headway in sewing. Softwear Automation is one such company; it uses high-speed cameras to instantaneously map the position and orientation of a textile as it flexes and bends, allowing the robot to make micro-adjustments as it sews pieces of fabric together. Nike is using automation to circumvent the traditional shoe assembly process altogether with its FLYKNIT line of shoes, which uses robotics to custom-knit the top portion of a shoe out of one continuous thread, doing away with the need to assemble the shoe from multiple separate pieces of material. Alternative textile and clothing manufacturing techniques such as 3D printing have been heralded as poised to revolutionize the fashion industry, but the items created by these processes and new materials remain niche outside of the athletic wear market.

Automation has struggled with clothing care almost as much as it has with sewing. Although machines can wash and dry clothing (sometimes in the same machine), the process of loading, unloading, and folding remains repetitive and laborious with few reasonable automated alternatives. Startups Laundroid and Foldimate have developed laundry folding robots, although their solutions are large desktop or closet-sized machines with limited capacity. Robotic clothes care systems similar to those imagined in science fiction are likely many years away.

Agriculture

Automating the harvest of crops that are today picked by hand has so far been difficult because many of these crops are easily damaged and computers have had trouble visually recognizing the fruit or produce they are trying to pick. Despite these challenges, many companies are working towards fully automating a variety of agricultural tasks. Harvest Automation has created a robot called the HV-100 that can perform a series of tasks in nurseries and greenhouses. According to one of their customers at McCorkle Nurseries, “We have many plants in the field that have been growing for as many as 24 months; they’ve been spaced, collected and spaced again and they haven’t been touched by a person since the day they were potted.”

In the fruit picking space, companies such as FFRobotics, Abundant Robotics, and Vision Robotics have worked on solutions for fruit picking. While none of their products are widespread yet, their efforts are helping pave the way towards a more automated world.  Abundant Robotics, for instance, is avoiding many of the difficulties of robotic grippers and instead using a suction device to pick fruit.   

There are also a series of startups working on weeding and thinning robots.  Among these are Naio Technologies, which has created a robot called Oz for small scale weeding.  It is also working on a robot called Dino for larger scale weeding, and a robot named Ted for vineyard weeding.  

It should also be noted that many of the traditional agricultural players are also working towards automating the farm. John Deere has offered some form of autonomy kit for years, the current version being AutoTrac. Outside of farming, Deere also offers a robot called the Tango E5, which is essentially a Roomba for mowing a lawn. Other competitors in the autonomous tractor space include AGCO and Kinze. AGCO, through its subsidiary Fendt, is working on a project called MARS, or Mobile Agricultural Robot Swarms. These swarms of small farming robots will be dragged into fields in logistics units (which also hold seed supply as well as batteries for the robots), and then split up to do their work.


Looking Ahead

Several themes emerge from this overview that enable us to predict what’s next for the robotics industry. Today, robotics have made significant headway into automating repetitive tasks requiring little finesse and tolerant of blunt solutions, like washing laundry and dishes or creating large quantities of textiles. Tasks that require processing large amounts of data and information can also be augmented by automation. On the other hand, tasks that require complicated manipulation of materials are still difficult to automate and need more skilled assistance, as does the deeper decision making and synthesis of information that robotics help us surface. As much as robotics have transformed many major industries, the need for humans to complete the last finishing steps, as diminishing as they may be, makes it abundantly clear that total robotic world domination remains out of reach.



A Daydream: On Magic Leap’s Recruitment Pamphlets

In April 2016, I attended the Game Developers Conference to take a look at recent developments in virtual reality technology. The famously mysterious company Magic Leap had a booth among the game studios and was handing out recruitment pamphlets.

[Image: a Magic Leap recruitment pamphlet depicting a balloon on the lunar surface]
A funny picture, I thought, and laughed at the ridiculousness of a balloon on the lunar surface. Because there is no atmosphere, and therefore no buoyancy force, the balloon would simply fall to the ground. When discussing the photo with a friend of mine, she smartly pointed out that the Moon does in fact have an atmosphere, albeit a very thin one. Sensing an interesting problem, I asked myself: could a balloon actually float in the thin lunar atmosphere?

As with any physics problem, the first step is to choose an approach. A buoyancy force balance seemed the logical strategy, so I started with the simple equation

$$m_{\mathrm{gas}} + m_{\mathrm{skin}} = m_{\mathrm{atm}}$$

This represents the equilibrium condition for floating, where the mass of the gas in the balloon plus the mass of the balloon skin equals the mass of the displaced atmosphere. To keep the situation ideal, I chose to ignore the string in the picture, as it is just dead weight.
The next step was to calculate the masses of the components. For simplicity (and the best volume-to-surface-area ratio) I chose to approximate the balloon as a sphere, which gives us the following equations based on the formulas for the volume and surface area of a sphere:

$$m_{\mathrm{gas}} = \rho_{\mathrm{gas}} \cdot \tfrac{4}{3}\pi r^3$$

$$m_{\mathrm{skin}} = \sigma_{\mathrm{skin}} \cdot 4\pi r^2$$

$$m_{\mathrm{atm}} = \rho_{\mathrm{atm}} \cdot \tfrac{4}{3}\pi r^3$$

Note that the skin density $\sigma_{\mathrm{skin}}$ is mass per unit area, while the densities of the gas and the atmosphere are mass per unit volume.

Plugging back into the first equation,

$$m_{\mathrm{gas}} + m_{\mathrm{skin}} = m_{\mathrm{atm}}$$

We come up with the following:

$$\rho_{\mathrm{gas}} \cdot \tfrac{4}{3}\pi r^3 + \sigma_{\mathrm{skin}} \cdot 4\pi r^2 = \rho_{\mathrm{atm}} \cdot \tfrac{4}{3}\pi r^3$$

Thankfully, a lot of these values cancel out (isn’t it great when physics works out like that?), granting us a much simpler equation:

$$\sigma_{\mathrm{skin}} = \frac{\left(\rho_{\mathrm{atm}} - \rho_{\mathrm{gas}}\right) r}{3}$$

It is worth noting that the balance point depends linearly on the radius of the balloon, which matches my intuition (a larger balloon can carry more weight than a smaller one).

Solving for the radius, we get:

$$r = \frac{3\,\sigma_{\mathrm{skin}}}{\rho_{\mathrm{atm}} - \rho_{\mathrm{gas}}}$$
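As a quick sanity check on the algebra, we can verify numerically that a radius computed from this formula satisfies the original mass balance. The density values below are arbitrary placeholders chosen only to exercise the equation, not lunar data:

```python
import math

# Arbitrary placeholder densities (not physical lunar values):
rho_atm = 5.0     # atmosphere mass density, kg/m^3
rho_gas = 1.0     # fill-gas mass density, kg/m^3
sigma_skin = 2.0  # skin areal density, kg/m^2

# Radius from the solved formula: r = 3*sigma_skin / (rho_atm - rho_gas)
r = 3 * sigma_skin / (rho_atm - rho_gas)

# The original balance: gas mass + skin mass = displaced atmosphere mass
lhs = rho_gas * (4 / 3) * math.pi * r**3 + sigma_skin * 4 * math.pi * r**2
rhs = rho_atm * (4 / 3) * math.pi * r**3
assert math.isclose(lhs, rhs)  # balloon exactly balances the displaced air
```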

Now it is time to start plugging in values. NASA provides the composition of the lunar atmosphere, as captured by experiments left behind by the Apollo missions. If you would like to experiment with different values, I’ve made an Excel document available here. Using this data and some basic arithmetic, we calculate the lunar atmosphere to have a particle density of roughly

$$n_{\mathrm{atm}} \approx 10^{5}\ \text{particles/cm}^3$$

And a mass density of

$$\rho_{\mathrm{atm}} \approx 4\times 10^{-15}\ \text{kg/m}^3$$

We want to use the most favorable conditions possible for this thought experiment, so we fill the balloon with the lightest gas (molecular hydrogen) at the same particle density as the atmosphere (no need to over-inflate the balloon) to get:

$$\rho_{\mathrm{gas}} \approx 4\times 10^{-16}\ \text{kg/m}^3$$

This is about one tenth the density of the atmosphere. Note that these calculations assume the gases behave as ideal gases, and that the temperature of the gas inside the balloon is the same as that of the atmosphere.

To further improve the favorability of this thought experiment, we want the skin of the balloon to be made of the thinnest, lightest substance known, in this case graphene, which has an areal density of:

$$\sigma_{\mathrm{skin}} \approx 7.6\times 10^{-7}\ \text{kg/m}^2$$

Plugging in these values grants us a balloon radius of:

$$r \approx 6\times 10^{8}\ \text{m}$$

This is a very large value, almost twice the distance from the Earth to the Moon, so the balloon in the picture is clearly impossible. At such scales, variations in the Moon’s gravity with distance come into play, the mass of the hydrogen gas may begin to measurably pull in on itself, and the Earth’s gravity becomes important as well.
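For readers who prefer code to spreadsheets, the whole back-of-the-envelope estimate fits in a few lines of Python. The densities below are the order-of-magnitude assumptions used in this post, not precise measured values:

```python
# Back-of-the-envelope lunar balloon estimate.
# All densities are order-of-magnitude assumptions, not measured data.

EARTH_MOON_DISTANCE_M = 3.84e8  # mean Earth-Moon distance, meters

rho_atm = 4e-15      # lunar atmosphere mass density, kg/m^3 (assumed)
rho_gas = 4e-16      # H2 at the same particle density, ~1/10 of rho_atm
sigma_skin = 7.6e-7  # areal density of single-layer graphene, kg/m^2

# Floating condition for a spherical balloon:
#   rho_gas*(4/3)*pi*r^3 + sigma_skin*4*pi*r^2 = rho_atm*(4/3)*pi*r^3
# which simplifies to r = 3*sigma_skin / (rho_atm - rho_gas).
radius = 3 * sigma_skin / (rho_atm - rho_gas)

print(f"balloon radius: {radius:.1e} m")
print(f"in Earth-Moon distances: {radius / EARTH_MOON_DISTANCE_M:.1f}")
```

Swapping in your own density estimates only shifts the answer by small factors; under any plausible assumptions the radius stays comparable to the Earth-Moon distance.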

Furthermore, there are other engineering problems to address: graphene may not have the structural integrity to contain that quantity of gas (or even to support its own weight), and it has never been manufactured in sheets of more than a few square centimeters. Transporting and assembling the delicate atom-thick structure in space would prove an immense challenge. Lastly, graphene is permeable to hydrogen gas, meaning the entire balloon would leak, likely rather quickly.

It seems that the most implausible and technically challenging achievement in this photograph is not the man sent to the moon, as the photographer clearly intended, but the balloon, which could float only if the moon were terraformed to have a much thicker atmosphere.



The Future of Offline Commerce

Over the past two decades, Amazon has risen spectacularly to become an online consumer-packaged-goods retail superpower (Alibaba has grown at a similarly fantastic rate, but is outside the scope of this post). Pressure from Amazon is forcing traditionally offline retailers not only to offer online storefronts, but also to match prices, hold flash sales, and offer fast shipping to compete.

At first glance, Amazon’s scale may make it seem an unbeatable commerce adversary, able to undercut its retail competitors through economies of scale. However, in many markets, particularly apparel, consumers strongly prefer to “try before you buy.” Likewise, retailers of consumer electronics, particularly devices that can be difficult to set up, prone to errors, or saddled with complex interfaces (such as IoT devices, smartphones, and laptops), may gain a competitive advantage by offering in-person support in their stores. Apple pioneered this approach, and others have since followed the same strategy.

Fast same- or next-day delivery is also rapidly changing consumer expectations of online shopping, and may present an opportunity for brick-and-mortar retailers to gain a competitive advantage over their online counterparts. Amazon Prime pioneered same- and next-day delivery, but Amazon is not nearly as well positioned for that level of distribution as existing brick-and-mortar outlets. Strategic placement of outlet stores near population centers, paired with on-hand staff and fast local shipping options from Google Shopping Express, UberRUSH, and TaskRabbit, would let these stores build a shipping and distribution system that rivals Amazon’s.

So how will retailers respond? I believe that storefronts will slowly morph into showrooms, where customers test or get support for products but make their purchases primarily online. Floor space usually reserved for duplicates of the stock on display may be repurposed into miniature warehouses or distribution centers, enabling rapid same-day shipping of products to customers. Many customers may prefer to have their new products shipped directly to their homes rather than carrying them out of the store. One fashion company, Bonobos, already does not allow customers to take the clothing they try on in-store with them, instead requiring the order to be placed online and shipped.

The showroom model lets companies leverage their existing storefronts as a same-day distribution network while preserving the customer benefits of in-person support, fitting, and browsing that Amazon has been unable to emulate, at least for now. Amazon’s plans to open physical storefronts show an interest in this model, and I believe that major brick-and-mortar retailers will act quickly to move toward a showroom strategy in the next few years.

Slavery without Slaves

Image borrowed from Scientific Computing.

Massive automation of the economy through intelligent and versatile machines may lead to a variety of different social outcomes. However, since ever more intelligent and capable machines represent an inexpensive, zero-wage form of labor owned as property, the economics of historical slavery may provide an insightful glimpse of how an automated economy could form and evolve.

Slavery is a sensitive topic to discuss, but if our slaves are machines, not people, then the label “slave owner” seems derisive but merely technical. However, such ownership is not as guilt-free as it may appear: providing inexpensive machines for mass use currently requires inexpensive human labor, often obtained in countries that do not abide by our laws of fair employment. Once an economy is sufficiently automated that human labor is no longer necessary to manufacture machines, these concerns may be suitably addressed.

For the vast majority of human history, and well into prehistory, humans have owned each other as slaves. Ancient Mesopotamian tablets dating back as early as 1750 BC distinguish between free men and slaves. Slaves played an important role in the economies of ancient and New World civilizations, especially where high real wages and low slave costs made slaves the most economical factor of production. However, slave ownership had drastically different effects on the culture and society of different historical civilizations. Will our future with enslaved intelligent machines more closely resemble ancient Sparta or ancient Athens?

In ancient Sparta, citizens were required to become soldiers, partly because soldiering was considered the most honorable profession, but also because a sizable standing army was necessary to keep the state’s large population of slaves, called Helots, from revolting. Populations shifted over time, but in some years Helots outnumbered Spartans seven to one, leading the Spartans to pacify the Helot populace with brutal subjugation techniques such as mandatory beatings and random killings. The common science fiction theme of a robot revolt echoes this concern about maintaining power over the workers.

Athens also had a large slave population, but the relationship between personally owned slaves and their masters was very different from Sparta’s. Athens had as many as 80,000 slaves in the 6th and 5th centuries BC, averaging three to four slaves per household; according to Greek literature, even poor peasants could afford slaves. Athenians, free of Sparta’s fear of revolt, were able to use their freedom from labor to pursue scientific, cultural, and political goals. Because slave-owning citizens no longer needed to work to provide for themselves, they spent their days creating beautiful works of art, debating political theory and devising new forms of governance (such as democracy) in the agora, and developing the foundations of modern science and mathematics.

Will widespread adoption of intelligent, capable machines create a system closer to the slave economy of Athens or of Sparta? Automated systems, so long as they do not threaten to overthrow their masters, would not require a Spartan approach, and most conditions point toward the Athenian model. Free economies around the world emphasize individual rather than state ownership of goods, as Athens did. Automated systems may also become very inexpensive, especially since powering a machine costs far less than feeding and sheltering a human; within a few years of introduction, prices for home robotic systems will likely be low enough to be affordable even for low-income households. Hopefully, the widespread freeing of human time will follow the Athenian model, with people using their new freedom from labor to explore the humanities, mathematics, and the sciences, rather than the Spartan model of military expansion and desire for foreign conquest.

Although new technologies virtually always raise the overall standard of living, the adoption of automation technologies is likely to be highly capital intensive at first. Some economists believe that high capital costs may accelerate wealth inequality, and potentially cause social tensions and reforms, as discussed in an earlier post. Slave ownership in antiquity provides some justification for this concern: in ancient Rome, large estates known as latifundia were powered by armies of slaves owned by the wealthy. Latifundia were, in effect, the first form of industrialized agriculture, generally specializing in grain, olive oil, or wine, and were necessary to support Rome’s growing mega-cities. Economies of scale allowed the latifundia to grow rapidly and consolidate smaller farms, and in turn to concentrate regional power and wealth in a small elite. Pliny the Elder noted that at one point half of the province of Africa was controlled by a mere six landowners.

Intelligent machines and mass automation may free humans from work, and may further free us from exploiting one another. Free time may spark a new renaissance of art, philosophy, and science, fueled by greater numbers of ambitious people with no need to spend their days securing sustenance. However, the distribution of land, power, and wealth remains uncertain. Could we enjoy the economic benefits of slavery without the horrors of mistreatment, subjugation, torture, and hard labor: everyone a slave owner, and no person a slave? Will inexpensive intelligent machines create a world of luxury for everyone, or consolidate wealth and power among a very few? How can we ensure that our future relationship with intelligent machines will be more like Athens and less like Sparta?