Issue Number 283 June 15, 2020

This free Information Age Education Newsletter is edited by Dave Moursund and produced by Ken Loge. The newsletter is one component of the Information Age Education (IAE) and Advancement of Globally Appropriate Technology and Education (AGATE) publications.

All back issues of the newsletter and subscription information are available online. A number of the newsletters are available in Spanish on the AGATE website mentioned above. Dave Moursund’s book, The Fourth R (Second Edition), is now available in both English and Spanish (Moursund, 2018a, link; Moursund, 2018b, link). The unifying theme of the book is that the 4th R of Reasoning/Computational Thinking is fundamental to empowering today’s students and their teachers throughout the K-12 curriculum. The first edition was published in December, 2016, and the second edition in August, 2018. The Spanish translation of the second edition, La Cuarta R, was published in September, 2018. These three editions of The Fourth R have now had a combined total of more than 101,500 page-views and downloads, and more than 22,500 of these are the Spanish edition.

I am currently writing a book tentatively titled ICTing and Mathing Across the History Curriculum. Four earlier IAE Newsletters contain substantial content from this work-in-progress book. See IAE Newsletter - Issue 254 - March 31, 2019; IAE Newsletter - Issue 255 - April 15, 2019; IAE Newsletter - Issue 256 - April 30, 2019; and IAE Newsletter - Issue 257 - May 15, 2019. This current newsletter is the eighth in a series of newsletters that will become parts of the book.

Introduction to ICTing and Mathing
Across the History Curriculum. Part 11

David Moursund
Professor Emeritus, College of Education
University of Oregon

“Never doubt that a small group of thoughtful committed citizens can change the world; indeed, it's the only thing that ever has.” (Margaret Mead; American cultural anthropologist; 1901-1978.)

“Each problem that I solved became a rule which served afterwards to solve other problems.” (René Descartes; French philosopher, mathematician, scientist, and writer; 1596-1650.)

“The computer was born to solve problems that did not exist before.” (Bill Gates; American business magnate, software developer, investor, and philanthropist, and cofounder of Microsoft Corporation, 1955-).

The first of the three quotations given above is one of my favorites. We live in a world that is changing. In some cases, a single individual or a small group makes a world changing discovery or develops a world changing insight. A small group of scholars developed reading and writing about 5,500 years ago. That has proven to be a world changer.

The second and third quotations focus on problem solving. Humans are very good at identifying problems that they want to solve—or that they think someone else should solve. In many disciplines of study, problem solvers can build on the previous problem-solving work of themselves and others. This is a very important idea. The computer is an aid to solving both old and new problems.

I have made considerable use of the Web in writing this newsletter, and I also drew on my own math and science background. The sources of information used in doing historical research can be divided into two general categories (University of Washington, Tacoma, n.d., link):

Primary sources include documents or artifacts created by a witness to or participant in an event. They can be firsthand testimony or evidence created during the time period that you are studying. Primary sources may include diaries, letters, interviews, oral histories, photographs, newspaper articles, government documents, poems, novels, plays, and music. The collection and analysis of primary sources is central to historical research.

Secondary sources analyze a scholarly question and often use primary sources as evidence. Secondary sources include books and articles about a topic. They may include lists of sources, i.e. bibliographies, that may lead you to other primary or secondary sources.

The Web is a collection of both primary sources and secondary sources. The primary source articles often are sufficiently technical that they are mainly readable and understandable by specialists in the topic being presented. The secondary sources include almost all of the content published in the popular, widely read media. The newsletter you are now reading falls into this general category. I believe it contains some original thinking, but it does not contain any original research that I have conducted. As students learn to read in school, there is considerable emphasis on learning to read across the curriculum. Nowadays, this emphasis is being broadened to include Webbing across the curriculum.

If you are a history teacher, you may appreciate my observation that most of the content of the Web is largely history, in that it consists mainly of reports on what already has happened. However, in addition to this history, the Web includes weather forecasts, ocean tide tables, the time of upcoming eclipses of our sun and moon, and so on, based on scientific predictions of the future. It also contains computer programs that can solve a wide variety of science, technology, engineering, and math problems. The Web is a global change agent, and is certainly a powerful change agent in our schools. [See the next newsletter in this current series.]

An Increasing Pace of Change

The history of prehumans and early humans was, to a great extent, uneventful. The small changes occurring in their everyday lives did not leave a record of world changing events. We have evidence of the invention of rock knapping about 3.3 million years ago and of controlled fire perhaps a million years ago. These are examples of world changing events.

For many tens of thousands of years, people made slow progress in solving problems and accomplishing tasks that would improve the quality of their lives. This gradually began to speed up with the development of widespread agriculture about 11,000 years ago, which led to an increasing population, and then with the development of reading and writing. There was a gradual accumulation of knowledge and skills that the next generation of people could build upon and add to. We began to develop libraries to preserve this knowledge and to make it available to those who needed it (Andrews, 11/17/2016, link):

The world’s oldest known library was founded sometime in the 7th century B.C. for the “royal contemplation” of the Assyrian ruler Ashurbanipal. Located in Nineveh in modern day Iraq, the site included a trove of some 30,000 cuneiform tablets organized according to subject matter. Most of its titles were archival documents, religious incantations and scholarly texts, but it also housed several works of literature including the 4,000-year-old “Epic of Gilgamesh”.

Now, let’s jump forward more than 2,000 years to the time when underground coal mining had become important to people in Europe. The problem of dealing with water in underground mines eventually led to the development of the first steam engine just before 1700. But, it took another 65 years before a practical, useful steam engine was produced and the Industrial Revolution began in England. This was a world changing event, and the pace of change began to increase ever more rapidly.

A Few Examples of Technology-based Change Since 1800

Progress in technology brought huge changes to the world during the 1800s. Steam engines brought us ocean-going steamships. The development of the telegraph, telephone, and radio during those years greatly changed communication. Electricity brought us electric lights and electric motors.

A partially automated loom was developed in 1804–05 by Joseph-Marie Jacquard of France (Sack, 7/7/2019, link). Jacquard’s loom utilized interchangeable punched cards that controlled the weaving of the cloth so that any desired pattern could be obtained automatically. This was a type of labor-saving technology that changed an important part of manufacturing, and was an early stepping stone in the development of computers.

The noted English inventor Charles Babbage adapted punched cards as an input-output medium for the Analytical Engine, a mechanical digital computer that he designed in 1837 but never completed. Punched cards were used by the American statistician Herman Hollerith to store and process data from the U.S. Census of 1890 (Encyclopaedia Britannica, 2020, link). Hollerith's tabulating machine company was one of the firms that merged in the early 1900s to form the company that Thomas J. Watson built into International Business Machines (IBM).

The 20th century brought ever more massive and long-lasting changes. Medicine as a science established a strong foothold. In 1928, Alexander Fleming discovered penicillin, the first true antibiotic. This was about the same time that television was being developed. By the late 1930s, electronic and vacuum tube technology had developed to a level that made it possible to begin to build an electronic digital computer. In 1947, the transistor was developed, and eventually this technology largely replaced vacuum tubes in computers. The first ocean-going container ships entered service in the 1950s.

In 1953, scientists James D. Watson and Francis H.C. Crick announced that they had determined the double-helix structure of DNA, the molecule containing human genes. A science of genetic engineering was emerging, and this has certainly become a world changer.

The first communications satellite was placed into orbit in 1962. We now have very powerful computers, communication satellites, fiber optics, the Internet and the Web, robots, and smartphones. I view these developments as world changers.

You just happen to be living at a time when this momentous wave of change is getting well started. You can witness it and participate in it right now. You can practice being a futurist, and make forecasts of how Information and Communication Technology (ICT) will change the world during the next century or millennium. Moreover, you and your students can become historians, examining how ICT has changed the world during the past 75 years. You and your students are likely to have relatives that are 75 years old or older who can share stories of the many ways that these developments have changed their lives.

The Information Age

You know about the Hunter-gatherer Era, the Agricultural Era, the First Industrial Revolution (steam power and locomotives), the Second Industrial Revolution (electricity, electric lights, and internal combustion engines), and the Information Age (also called the Third Industrial Revolution). In the United States, the Information Age began in 1956, when the number of white-collar workers first exceeded the number of blue-collar workers (Moursund, 2019, link):

Of course, at that time relatively few jobs had much to do with computers and computer-related technology. What was occurring was a steady trend away from people holding Industrial Age manufacturing jobs. An increasing number of people held jobs as store clerks, office workers, teachers, nurses, and so on.

We were shifting into a service economy. The mass media might have called the new era the Service Age. However, Information Age certainly sounds more hip.

So, here is a question for all teachers. For each of the subject areas that you teach, what have been some of the major historical change agents in these subject areas? More specifically, how is Information and Communication Technology (ICT) changing the content, pedagogy, and assessment of the disciplines that you teach? Or, to be still more specific, what differences are computers making? Since every discipline of study has a history, this newsletter may well be useful to all teachers.

My recent Google search on the expression history of computers produced about 881 million results. Needless to say, I have not read all of this literature in preparation for writing this newsletter! The remainder of this newsletter looks at a few examples of computer-based world changing events that are currently ongoing.


The early electronic digital computers were stand-alone machines. It wasn’t until 1969 that the Arpanet was first used to connect two computers (Shedden, 10/29/2014, link). Nowadays, we take such connectivity for granted. We tend to think of a computer as both an information processing device and a connectivity device, and we consider both when discussing the speed of a computer system.

Connectivity was a major issue long before we had electronic digital computers. The electrical telegraph was developed in the 1840s, and it required connectivity. Similarly, the telephone required connectivity. Both of these means of communication were thwarted by oceans.

However, there currently are more than 400 undersea cables (Routley, 8/24/2017, link). The history of undersea cables is quite interesting. Here are three videos on this topic that I enjoyed viewing: (Social Sciences and Humanities Research Council of Canada, 6/26/2017, link); (AT&T, 3/21/2011, link); (AT&T Archives, 5/15/2014, link). The third video is the story of a 1959 transatlantic cable for carrying telephone conversations. Three years earlier, the first transatlantic telephone cable had been completed (Wikipedia, 2020f, link). That cable could handle only 36 simultaneous conversations.

The development of fiber optics then led to a truly amazing progress in connectivity (Wikipedia, 2020e, link):

An optical fiber is a flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair. Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data transfer rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss; in addition, fibers are immune to electromagnetic interference, a problem from which metal wires suffer.

Contrast the achievement of the 1959 transatlantic cable carrying 36 telephone calls with today’s undersea fiber optic cables (Nguyen, 2/28/2019, link):

A new experiment has achieved a record fiber optic cable capacity of 26.2 terabits per second across more than 6,000 kilometers of the MAREA transatlantic fiber optic cable. [This cable is jointly owned by Microsoft and Facebook. Currently, the average transfer speed across the Marea cable is around 9.5 terabits per second.]

If you are making an overseas phone call or using cloud computing, there is a 99 percent chance an undersea fiber optic cable is being utilized. Now, new work with lasers shows promise for squeezing more data through these cables, to help meet the growing demand for data flow between computers in North America and Europe.

A terabit is a trillion bits. At a speed of 9.5 terabits per second, this is more than enough to accommodate a million viewers each watching a different DVR-quality movie. I find it hard to imagine a half-million video phone conversations, each with DVR-quality video, all occurring simultaneously over this one transatlantic fiber optic cable (Harris, 2/6/2020, link):

Globally, about 380 submarine cables carry more than 99.5 percent of all transoceanic data traffic. Every time you visit a foreign website or send an email abroad, you are using a fiberoptic cable on the seabed. Communication satellites, even large planned networks like SpaceX’s Starlink system [Elon Musk's private spaceflight company], cannot move data as quickly and cheaply as the underwater cables.
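Teachers who enjoy such arithmetic can check the movie-viewing claim above with a few lines of Python. The 5-megabit-per-second figure for a DVR-quality video stream is my own rough assumption, not a number from the sources quoted here.

```python
# Back-of-the-envelope check of the cable-capacity claim.
# Assumption (mine, not from the quoted sources): a DVR-quality
# video stream needs roughly 5 megabits per second.
cable_bits_per_second = 9.5e12    # 9.5 terabits/second (Marea average)
stream_bits_per_second = 5e6      # assumed per-viewer bitrate

simultaneous_streams = cable_bits_per_second / stream_bits_per_second
print(f"{simultaneous_streams:,.0f} simultaneous streams")
```

At that assumed bitrate, the cable supports nearly two million simultaneous streams, comfortably above the million-viewer figure mentioned above.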

Communication satellites will nonetheless play an increasingly important role, both now and in the future. Technologically, we now have the potential to provide satellite-based high-speed communication service to all of the world's people. Elon Musk is a leader in such visionary projects (O'Callaghan, 4/21/2020, link):

The deployment of the Starlink satellites seems to have gone exactly to plan, which means SpaceX now has around 240 satellites in service for Starlink. Already, after the last batch went up in early January, SpaceX became the largest private satellite operator in the world, and now it’s just extending its lead.

There currently are plans for at least a half-dozen more launches this year. It is expected that a total of 1,440 of these communication satellites will be enough to provide high-quality global service.

The Speed of Computers

I happen to like dealing with large numbers, provided they are relevant to a topic that interests me. In the previous newsletter, I mentioned that the first commercially available computers could complete about a thousand arithmetic operations per second, and that the fastest computer in the world today is about 200,000 billion times as fast. So, here is my question: who needs that speed, and what is it good for?

Can you think of any type of frequently occurring problem that requires the use of very high-speed computers? I’ll give you a hint. It is quite likely that you frequently have encountered such a problem.

I encountered this type of problem when I wanted to know about the world’s fastest computer. I used Google to search on the expression speed of world’s fastest computer. The Google search engine produced 198 million results in 0.46 seconds.

What actually happened? Google searched its database, found 198 million results that it determined were relevant to my search expression, ranked them using a proprietary set of about 200 criteria, and told me that it had taken less than one-half of a second to complete this task.

A Digression

I want to digress for a minute. Twenty years ago in the United States, the typical public secondary school had a library and a librarian, and had holdings of about 20 books per student. Thus, a school with about a thousand students was apt to have a library containing about 20,000 books plus a wide variety of journals, audiovisual materials, pictures, etc. (NCES, 2005, link).

The books would be catalogued in a card catalog to aid students, teachers, and others in finding a book on a particular topic. The card catalog would contain an average of about six cards per book, including an author card, a title card, and usually one or more subject cards. So, the 1,000-student school might well have 120,000 or more cards in its card catalog.

The library card catalog was a great development, as was shelving books in a systematic manner, usually according to the Dewey Decimal or Library of Congress classification systems. These are a big help to a student who is seeking information. However, finding information about a very specific topic would often thwart a student. The school librarian, already thoroughly familiar with the library’s holdings, might still spend hours helping the student. Contrast this with being able to search the Web on any imaginable topic and receive a response in just seconds. And, the Web library being searched is many tens of thousands of times as large as the school library!

Now, we return to my Google search for information about the world’s fastest computer, and the results boggle my mind. The fact that Google found 198 million hits tells me that it searched a huge database. I wondered how large Google’s database is, and whether it includes the entire Web. So, I proceeded with another Google search—this time on the expression size of the database searched by Google. Google responded with the information that it had found 493 million results in 0.44 seconds. I spent a couple of minutes browsing the short descriptions of some of its top-listed results, and finally settled on an article about Google Search (Wikipedia, 2020c, link):

Google Search, also referred to as Google Web Search or simply Google, is a web search engine developed by Google. It is the most used search engine on the World Wide Web across all platforms, with 92.62% market share as of June 2019 handling [averaging] more than 5.4 billion searches each day.

The order of search results returned by Google is based, in part, on a priority rank system called "PageRank".

Very interesting numbers. The number 5.4 billion searches a day is roughly 60 thousand searches a second. The number 5.4 billion can be compared with the earth’s total population of about 7.8 billion people to determine that Google does about 2/3 of a search per day for every person on earth. This requires the use of hundreds of thousands of very fast computers. At the Internet Live Stats website, you can view a timeclock counting off the number of Google searches conducted so far in the current day (Internet Live Stats, n.d., link).
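For readers who want to check my arithmetic, here is a short Python sketch of the two calculations just described. The population figure is the same rough 7.8 billion estimate used above.

```python
# Checking the newsletter's arithmetic on Google search volume.
searches_per_day = 5.4e9
seconds_per_day = 24 * 60 * 60      # 86,400 seconds in a day
world_population = 7.8e9            # rough 2020 estimate

searches_per_second = searches_per_day / seconds_per_day
searches_per_person_per_day = searches_per_day / world_population

print(round(searches_per_second))             # about 62,500: "roughly 60 thousand"
print(round(searches_per_person_per_day, 2))  # about 0.69: "roughly 2/3"
```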

However, this data does not answer my original question about the size of the database searched by Google. Further browsing of the results from my search produced a website that partially answered my question: “The Indexed Web contains at least 5.5 billion pages” (de Kunder, 5/22/2020, link).

But, what is a page? Still another Google search provided the information that an average Web page has been increasing in size and is now about four million bytes. One byte is eight bits, and stores an alphabetic character, numeric character, punctuation mark, or blank space. Thus, a six-character word followed by a blank space uses seven bytes of computer memory storage. A 300-page novel is about a million bytes in length. I think of an average Web page as being roughly equivalent to four novels that do not contain any pictures or other graphics. Pictures take up a lot of computer storage space.
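Here is a small Python sketch of this page/novel arithmetic. The characters-per-printed-page figure is my own rough assumption.

```python
# The newsletter's page/novel equivalence, made explicit.
bytes_per_char = 1            # one byte per character, space, or punctuation mark
chars_per_printed_page = 3_300  # assumed: roughly 55 lines of 60 characters
pages_per_novel = 300

bytes_per_novel = bytes_per_char * chars_per_printed_page * pages_per_novel
print(bytes_per_novel)        # 990,000 -- about a million bytes, as stated above

avg_web_page_bytes = 4_000_000  # about four million bytes
print(avg_web_page_bytes / bytes_per_novel)  # about 4 novels per Web page
```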

You might think that Google stores the full content of all of the documents it searches, but it does not. Doing so would violate the various copyright laws of countries throughout the world. What Google does do is develop an individual index for each of the billions of documents that the Google Search engine will be instructed to search. This indexing process is carried out by a computer, by indexing each document under each word it contains (Google, 2020a, link).
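The indexing idea just described is commonly implemented as what computer scientists call an inverted index. Here is a toy Python illustration of the concept. It is my own minimal sketch, and of course nothing like Google's actual software, which must cope with billions of documents.

```python
# A toy inverted index: each document is indexed under every word
# it contains, so a search term can be looked up directly without
# scanning the documents themselves.
from collections import defaultdict

documents = {
    1: "the fastest computer in the world",
    2: "the speed of light",
    3: "computer speed keeps increasing",
}

index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

print(sorted(index["speed"]))     # documents 2 and 3
print(sorted(index["computer"]))  # documents 1 and 3
```

A real search engine adds ranking (such as PageRank, mentioned above) on top of this lookup step, but the lookup itself is why a search over billions of documents can finish in a fraction of a second.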

My recent Google search on the word and produced about 25,270,000,000 results in 0.46 seconds, and produced the same results when I searched on the word the. This suggests to me that at the current time Google is searching more than 25 billion individual documents, and that doing one such search and processing the results takes about one half-second. To carry out an average of 60,000 searches per second, Google makes use of many tens of thousands of very fast computers.

These computers make use of huge, very fast, internal and external storage units. This reminds me of the time in the late 1970s when microcomputers were beginning to be widely accepted as aids to business and other data processing. I was very impressed when I heard about a five-megabyte, $5,000 disk drive for microcomputers. Now, $40 will get me a 256-gigabyte thumb drive. Counting inflation, this is a decrease in cost per byte stored by a factor of more than 15 million. A 256-gigabyte thumb drive stores the equivalent of about 256,000 novels that average 300 pages each.
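A short Python calculation shows where a figure of this magnitude comes from. The inflation multiplier is my own rough assumption for late-1970s dollars versus 2020 dollars.

```python
# Rough check of the cost-per-byte comparison in the text.
old_cost_per_byte = 5_000 / 5e6   # $5,000 for a 5-megabyte disk drive
new_cost_per_byte = 40 / 256e9    # $40 for a 256-gigabyte thumb drive

nominal_factor = old_cost_per_byte / new_cost_per_byte
print(f"{nominal_factor:,.0f}")   # 6,400,000 before counting inflation

assumed_inflation = 3.6           # assumed CPI multiplier, ~1979 to 2020
print(f"{nominal_factor * assumed_inflation:,.0f}")  # roughly 23 million
```

Counting inflation, the decrease comes to well over 15 million, consistent with the figure given above.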

Of course, a thumb drive is slower than a disk drive. I recently purchased a four-terabyte backup disk drive for my desktop computer for under a hundred dollars. In terms of the cost per byte of storage, and taking inflation into account, this is a decrease in cost by a factor of about 12 million over the past 40 years!

Artificial Intelligence

The development of aids to human physical capabilities has been a very important part of our history. Horses, oxen, dogs (to pull dogsleds), and other draft animals were domesticated and provided power to supplement and sometimes replace human muscle power. I doubt if anyone thought to use the expression Artificial Muscle at the time. However, just for the fun of it, I once wrote a blog titled Artificial Intelligence and Artificial Muscle (Moursund, 2/16/2011, link). The blog has had more than 21,000 hits.

The term horse power is part of our vocabulary, and is traced back to 1776, when James Watt was trying to compare the power of the steam engine he had developed to the power of the horses previously used to help pump water out of mines. Horse power has a precise scientific definition and is easily measurable (Rouse, 2020, link):

A horse-power is a measure of power—550 foot-pounds per second. A foot-pound is a unit of work equal to the work done by a force of one pound acting through a distance of one foot in the direction of the force.
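As a quick check of this definition, we can convert 550 foot-pounds per second into the more familiar SI unit of watts. The conversion constants below are standard values, not taken from the quoted source.

```python
# Converting the 550 foot-pound-per-second definition into SI watts.
FOOT_IN_METERS = 0.3048            # standard definition of the foot
POUND_FORCE_IN_NEWTONS = 4.448222  # standard value for one pound-force

one_horsepower_watts = 550 * FOOT_IN_METERS * POUND_FORCE_IN_NEWTONS
print(round(one_horsepower_watts, 1))  # 745.7 -- the familiar ~746 watts
```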

Thus, we can measure muscle power (horse power) precisely, but how does one define and measure brain power or artificial intelligence? When it became apparent that electronic digital computers could perform some intelligent-like tasks, the terms Artificial Intelligence in the U.S. and Machine Intelligence in Europe were developed to describe this phenomenon. People did not choose to use the expression artificial brain power or artificial cognition.

For many years we have had a variety of IQ tests that can be used to assign an Intelligence Quotient number to a person. However, this lacks precision, partly because no two brains are identical. A single number can represent the power being produced by an engine, but a single number cannot represent the huge number of different capabilities of a human brain.

If the only thing we wanted a computer to do was to perform the arithmetic operations of addition, subtraction, multiplication, and division rapidly and accurately, we could compare this to average human ability to do the same tasks. Within this very narrow range of intelligent behavior, even the first commercially available computers in the early 1950s were many thousands of times as intelligent as humans.

We know that human intelligence is far more than the ability to do arithmetic operations. How does one define and measure human-like intelligence in a computer? Early AI researchers selected concrete examples, such as playing a game of checkers or chess. If a computer could be taught (programmed) to play checkers or chess well, this would indicate progress in developing AI.

People working in AI can point to progress the field has made in developing computers that can outperform humans in a variety of areas. It was a milestone when a computer defeated the reigning world chess champion in 1997. It was another milestone when a computer defeated two past human champions in the TV game of Jeopardy in 2011. Then, in 2016, a computer defeated a world champion player in the game of Go, a game that is far more complex than chess. Also, today’s computers can now process voice input and produce voice output. From time to time, I make use of the free software named Google Translate that currently can provide translations between 109 different human languages (Google, 2020b, link).

However, it turns out that all of these rather amazing computer accomplishments are done by machines having absolutely no understanding of what they are doing. I like the analogy of an industrial manufacturing tool stamping out metal parts that will later be assembled into a product to be sold and used. The machine is much faster and more precise than humans attempting to accomplish the same task using hand tools, but we do not think of such a machine as having human-like muscle power.

Many researchers in AI are working to develop Artificial General Intelligence (AGI) (Wikipedia 2020b, link):

Artificial General Intelligence (AGI) is the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI, or general intelligent action. Some academic sources reserve the term "strong AI" for machines that can experience consciousness. Today's AI is speculated to be decades away from AGI.

In contrast to strong AI, weak AI (also called narrow AI) is not intended to perform the full range of human cognitive abilities; rather, weak AI is limited to the use of software to study or accomplish specific problem solving or reasoning tasks. [Bold added for emphasis.]

Today’s AI research is being heavily funded by governments, for-profit corporations, and non-profit corporations throughout the world. For example, the Allen Institute is a privately funded, non-profit, bioscience research institute founded by Paul Allen, one of the founders of Microsoft.

The Allen Institute conducts large-scale research with a commitment to an open science model within its research institutes, the Allen Institute for Brain Science, the Allen Institute for Cell Science, the Allen Institute for Immunology, and the Allen Institute for AI (Wikipedia, 2020a, link).

The company OpenAI, Inc. provides an interesting twist on non-profit and for-profit companies engaged in AI research. It was founded on December 11, 2015 as a non-profit by Elon Musk and five other people. However:

The Tesla CEO parted ways with the lab in 2018, and last year it became a for-profit company and took a $1 billion investment from Microsoft. OpenAI’s leaders claim that only by commercializing its research for the benefit of investors can it raise the billions needed to keep pace on the frontiers of AI (Simonite, 6/11/2020, link).

In 2019, Microsoft invested a billion dollars in OpenAI, and Microsoft subsequently built a new supercomputer for OpenAI’s use (Langston, 5/19/2020, link):

But this is no run-of-the-mill supercomputer. It’s a beast of a machine. The company said it has 285,000 CPU cores, 10,000 GPUs [Graphic Processing Units], and 400 gigabits per second of network connectivity for each GPU server.

Stacked against the fastest supercomputers on the planet, Microsoft says it’d rank fifth.

“As we’ve learned more and more about what we need and the different limits of all the components that make up a supercomputer, we were really able to say, ‘If we could design our dream system, what would it look like?’” said OpenAI CEO Sam Altman. “And then Microsoft was able to build it.”

What will OpenAI do with this dream-machine? The company is building ever bigger narrow AI algorithms—we’re nowhere near Artificial General Intelligence yet … [Bold added for emphasis.]

This particular computer includes 285,000 CPU cores and 10,000 very fast GPUs, each with a large amount of very fast memory and connected to the others by very fast networking. This new computer is a virtual machine. This means that it is a computer being simulated on physical computers that Microsoft has in one or more of its currently existing computer centers.

Perhaps this statement becomes more understandable if I relate it to cloud storage. You know that computer data is not really stored in clouds in the sky. The storage space is in physical storage devices residing in computer centers. A person or company can rent the amount of computer storage it needs, and this can be changed by merely asking for more storage or less storage, paying only for the amount needed at a particular time.

Online Shopping Is Changing the World

The focus of this section is on the many ways that computer speed, connectivity, and AI are changing retail shopping. It uses the company named Amazon, established in 1994 by Jeff Bezos, as an example.

The first electronic digital computers were self-contained, stand-alone machines. It took years of research before people first succeeded in connecting two computers (Computer Hope, 2020, link):

The Internet was officially born, with the first data transmission being sent between UCLA and SRI, on October 29, 1969, at 10:30 p.m.

Ray Tomlinson sent the first e-mail in 1971.

From these initial connectivity efforts, it was not too big a step to selling electronic books and other digital content that could be ordered and delivered online. From there, an online sales company could move on to taking orders online, and then delivering physical products using distribution systems that had been established in the past. But, this certainly did not occur overnight. The story of Jeff Bezos, who founded Amazon in 1994, is a truly amazing business success story (Wikipedia, 2020d, link):

After reading a report about the future of the Internet that projected annual web commerce growth at 2,300%, Bezos created a list of 20 products that could be marketed online. He narrowed the list to what he felt were the five most promising products, which included: compact discs, computer hardware, computer software, videos, and books. [All lent themselves to be sold online, but delivered as physical products.] Bezos finally decided that his new business would sell books online, because of the large worldwide demand for literature, the low unit price for books, and the huge number of titles available in print. Amazon was founded in the garage of Bezos’ rented home in Bellevue, Washington. Bezos' parents invested almost $250,000 in the start-up.

Amazon makes substantial use of computers and AI. The following short quotes are from an article by Stephanie Condon, based on information presented at an Amazon-sponsored conference (Condon, 6/5/2019, link):

“Amazon has been a technology company from the start,” Jeff Wilke, CEO of Amazon's Worldwide Consumer division, said during the Wednesday keynote. That said, he added, “We're only in the beginning stages of truly understanding the potential” of AI.

Now automation is a key part of Amazon operations. The company is on a path to deploying more than 200,000 robots, Wilke said.

Amazon also revealed its newest fulfillment center robots, called Pegasus and Xanthus, as well as a new drone that Amazon says will start commercial deliveries within months.

Throughout, “We didn't sequester our scientists,” he said. Instead, data scientists were integrated into teams focused on the product and customer experience. “They start with the customer experience, not the machine learning algorithm,” he said.

At the end of 2019, Amazon had about 840,000 employees, its 2019 sales were $280 billion, and its 2019 profit was $14.5 billion (Wikipedia, 2020d, link). When the coronavirus began to seriously affect the United States, Amazon quickly added more than 175,000 temporary new employees. It is likely that 100,000 or so of these will become permanent employees (Dumaine, 5/13/2020, link). Thus, at the time this newsletter was being written, Amazon had approximately one million employees.

Amazon’s delivery system is one reason for its success. Computers play a major role in this system. Amazon customers who pay a yearly fee for a Prime Membership receive free one-day delivery (Lee, 1/30/2020, link):

“Prime membership continues to get better for customers year after year. And customers are responding — more people joined Prime this quarter than ever before, and we now have over 150 million paid Prime members around the world,” Bezos said in a statement. “We’ve made Prime delivery faster — the number of items delivered to U.S. customers with Prime’s free one-day and same-day delivery more than quadrupled this quarter compared to last year. Members now have free two-hour grocery delivery from Amazon Fresh and Whole Foods Market in more than 2,000 U.S. cities and towns.”

In summary, mega online stores such as Amazon have changed retail selling. They also have made an impact on the daily lives of many of us, and they threaten the continuing existence of many local stores and shopping centers. Amazon sells more than 12 million different products (Amazon Blog, 2020, link). During the past few months, its sales have increased substantially due to the coronavirus pandemic. Other companies also have seen substantial increases in their online business. Telemedicine, for example, has expanded greatly.

Final Remarks 

We have explored only a few of the many ways that computer technology is changing our world. The pace of change has been rapid and the cumulative results rather overwhelming. This newsletter touched on some of the underlying, computer-related aspects of these changes. These include connectivity, computer speed, AI, information storage and retrieval, and the rapid expansion of online shopping.

All of these changes directly impact formal education. The first four have obvious ramifications for our schools. The fifth tells us that employment and business opportunities are changing. It is time for schools to evaluate these changes in order to better provide students with an education that will help to prepare them for their futures.

The next newsletter looks at the future of education and how computer technology will help to shape that future.

References and Resources

Amazon Blog (2020). Amazon statistics you should know: Opportunities to make the most of America’s top online marketplace. Retrieved 6/9/2020 from

Andrews, E. (11/17/2016). 8 legendary ancient libraries. History. Retrieved 6/9/2020 from

AT&T (3/21/2011). C S Longlines. (Video, 28:14.) Retrieved 6/6/2020 from

AT&T Archives (5/15/2014). Cable to the continent. (Video, 14:58.) Retrieved 6/6/2020 from

Computer Hope (2020). Computer networking history. Retrieved 5/25/2020 from

Condon, S. (6/5/2019). Amazon shares how it leverages AI throughout the business. ZDNet. Retrieved 6/8/2020 from

de Kunder, M. (5/22/2020). The size of the World Wide Web. Retrieved 5/22/2020 from

Dumaine, B. (5/13/2020). Amazon was built for a pandemic. (YouTube video, 7:05.) Retrieved 5/31/2020 from

Encyclopaedia Britannica (2020). Herman Hollerith. Retrieved 6/4/2020 from

Google (2020a). Google Search. Retrieved 6/12/2020 from

Google (2020b). Google Translate. Retrieved 6/12/2020 from

Harris, M. (2/6/2020). Google and Facebook turn their backs on undersea cable to China. Retrieved 6/7/2020 from

Internet Live Stats (n.d.). Google search statistics. Retrieved 5/22/2020 from

Langston, J. (5/19/2020). Microsoft announces new supercomputer, lays out vision for future AI work. The AI Blog. Retrieved 6/2/2020 from

Lee, N. (1/30/2020). Amazon has 150 million Prime members now. engadget. Retrieved 5/28/2020 from

Moursund, D. (2019). Information Age. IAE-pedia. Retrieved 5/25/2020 from

Moursund, D. (2018a). The fourth R (Second edition). Eugene, OR: Information Age Education. Retrieved 5/26/2020 from

Moursund, D. (2018b). La cuarta R (Segunda edición). Eugene, OR: Information Age Education. Retrieved 5/26/2020 from

Moursund, D. (2/16/2011). Artificial intelligence and artificial muscle. IAE Blog. Retrieved 5/31/2020 from

NCES (2005). America’s public school libraries: 1953-2000. National Center for Educational Statistics. Retrieved 6/7/2020 from

Nguyen, C. (2/28/2019). Microsoft and Facebook’s undersea Marea cable breaks data transfer speed record. Digital Trends. Retrieved 6/9/2020 from

O'Callaghan, J. (4/21/2020). What are those strange moving lights in the night sky? Elon Musk’s ‘Starlink’ satellites explained. Forbes. Retrieved 6/7/2020 from

Rouse, M. (2020). Horsepower (hp). Retrieved 6/10/2020 from

Routley, N. (8/24/2017). Map: The world’s network of submarine cables. Maps. Retrieved 6/7/2020 from

Sack, H. (7/7/2019). Joseph Marie Jacquard and the Programmable Loom. SciHi Blog. Retrieved 6/4/2020 from

Shedden, D. (10/29/2014). Today in media history: The Internet began with a crash on October 29, 1969. Poynter. Retrieved 6/4/2020 from

Social Sciences and Humanities Research Council of Canada (6/26/2017). The transatlantic telegraph cable. (Video, 14:51.) Retrieved 6/6/2020 from

University of Washington, Tacoma (n.d.). History: Primary & secondary sources. Retrieved 5/22/2020 from

Wikipedia (2020a). Allen Institute. Retrieved 6/8/2020 from

Wikipedia (2020b). Artificial General Intelligence. Retrieved 8/10/2020 from

Wikipedia (2020c). Google Search. Retrieved 5/22/2020 from

Wikipedia (2020d). History of Amazon. Retrieved 5/28/2020 from

Wikipedia (2020e). Optical fiber. Retrieved 6/7/2020 from

Wikipedia (2020f). Transatlantic communication cable. Retrieved 8/11/2020 from


David Moursund is an Emeritus Professor of Education at the University of Oregon, and editor of the IAE Newsletter. His professional career includes founding the International Society for Technology in Education (ISTE) in 1979, serving as ISTE’s executive officer for 19 years, and establishing ISTE’s flagship publication, Learning and Leading with Technology (now published by ISTE as Empowered Learner). He was the major professor or co-major professor for 82 doctoral students. He has presented hundreds of professional talks and workshops. He has authored or coauthored more than 60 academic books and hundreds of articles. Many of these books are available free online. See .

In 2007, Moursund founded Information Age Education (IAE). IAE provides free online educational materials via its IAE-pedia, IAE Newsletter, IAE Blog, and IAE books. See . Information Age Education is now fully integrated into the 501(c)(3) non-profit corporation, Advancement of Globally Appropriate Technology and Education (AGATE) that was established in 2016. David Moursund is the Chief Executive Officer of AGATE.


Reader Comments

We are using the Disqus commenting system to facilitate comments and discussions pertaining to this newsletter. To use Disqus, please click the Login link below and sign in. If you have questions about how to use Disqus, please refer to this help page.

Readers may also send comments via email directly to

About Information Age Education, Inc.

Information Age Education is a non-profit organization dedicated to improving education for learners of all ages throughout the world. Current IAE activities and free materials include the IAE-pedia at, a Website containing free books and articles at, a Blog at, and the free newsletter you are now reading. See all back issues of the Blog at and all back issues of the Newsletter at