Dec 31, 2014
Promotes Global Unity, Social Betterment and a More Humane Society
Sep 12, 2014
Features Live Music, Short Films, Comedy and Art; Promotes Social Consciousness Through the Power of Art
Mar 01, 2014
Toronto Main Event and Beyond
Feb 03, 2014
A New Book by The Zeitgeist Movement
Jul 01, 2013
"Changing the World Through Socially Conscious Art"
Apr 15, 2015 Host: James Phillips
This week’s episode of TZM Global is hosted by James Phillips from TZM Education and the UK chapter of TZM. Along with some brief news from the movement, James will be reading the next two articles from the 'minds in the making' section of www.tzmeducation.org, entitled 'Building a learning environment' and 'Building a new paradigm from the inside out'.
These articles outline educational and transitional models/methods that could help to shape the mindset required for a sustainable socio-economic system to emerge.
Apr 11, 2015 Host: James Phillips
TZM Global Ep. 171 with Jim Phillips: Education as a tool for Social Change [ The Zeitgeist Movement ]
This week's episode of TZM Global is hosted by TZM Education co-ordinator and UK chapter member James Phillips.
Along with a ZDAY Berlin round-up and some other news, James will be reading the next two articles from the TZME website (www.tzmeducation.org), regarding the link between educational and societal structure, and how we will need a radical change in both if we are to see a shift in our overall cultural values towards the adoption of a sustainable socio-economic system.
Apr 08, 2015 Host: Peter Joseph
TZM Global Ep. 170 with Peter Joseph, April 8th 2015: Zeitgeist Day 2015 Lectures, Cont.
Featured talks: Brandon Kristy / Eva Omori, ZDay 2015, Berlin Germany
Apr 01, 2015 Host: Peter Joseph
TZM Global Ep. 169 with Peter Joseph, April 1st 2015
Featured ZDay Berlin 2015 talks:
1) Lee Camp
2) Jim Phillips
James Phillips, United Kingdom
James Phillips is the co-coordinator of TZM Education: a global initiative to enable TZM members to go into educational institutions and deliver the movement's train of thought to the next generation. He is a regular host of the movement's global radio show and a regular speaker at events in the UK pertaining to sustainability and societal structure, for both TZM and third-party organizations. He also helps to co-ordinate the London chapter of the movement in the UK and goes into schools on a regular basis to talk to the younger generations about topics ranging from human behaviour to sustainable technology.
Presentation: TZM Education: Launchpad Sustainability
Regarding a strategic and effective approach to activism: what works, what doesn't, and what even counts as a metric when trying to measure such a thing. I will be elucidating why much of what we currently do as a movement could be nowhere near as effective as what could be achieved by going into schools and talking to kids in a joint and strategic effort.
Lee Camp, United States
Lee Camp is the head writer and host of the weekly comedy news show ‘Redacted Tonight with Lee Camp’ on RT America. He’s a former contributor to The Onion, former staff humor writer for the Huffington Post, and his web series “Moment of Clarity” has been viewed by millions. He’s toured the country and the world with his fierce brand of political stand-up comedy, and George Carlin’s daughter Kelly said he’s one of the few comics keeping her father’s torch lit. His TV show and podcast can be found at LeeCamp.net, as can his comedy albums and books.
Mar 25, 2015 Host: Peter Joseph
Peter Joseph plays audio from Zeitgeist Day 2015, Berlin Germany. Ben McLeish: "The Zeitgeist WorldView" Peter Joseph: "Origins and Adaptations P3"
Conventional wisdom would have you believe that most people enter adolescence with a head full of high-minded ideals and a willingness to shake up the system. As they get older, however, they gradually begin to accept the status quo. For me, that process is reversed.
The older I get, the more skeptical I become of our current social model. Why?
Let’s start with this:
It should be of increasing concern to all Americans that there is an extreme disconnect between what Americans believe about man-made climate change, and what science tells us about it. That is to say, despite there being a clear scientific consensus, man-made climate change is more often than not framed as an ambiguous concept in the U.S. mainstream media. Consequently, climate change is generally thought to be far more esoteric than it actually is.
INTRODUCTION AND DISCLAIMER 
The purpose of this project is to enable supporters of a natural law resource based economic model (NLRBE) to understand and appreciate the need to approach the education system in an effort to initiate the value shift required for a more peaceful and sustainable future to emerge.
Today I was reading The Zeitgeist Movement Defined: Realizing a New Train of Thought again. I did so because I have felt the need to express a certain frustration with this, my social movement, but haven't found the right words. I also didn't want to make any false assumptions about its architecture, so I went straight to the source with a pen in my hand.
I went through the nine pages that constitute the overview and extracted some notes I would like to post here:
We need more films about social, ecological and economic change!
We want to make one, and you could help us.
In our documentary "The Taste of Life", we want to show that there are people all over the world already practicing this change in a great way.
From social symptom to root causes came about as a by-product of ZDAY 2013 in London, in which all but the introductory talk featured outside organisations and speakers, each of whom seeks to address a particular social or environmental issue closely aligned with the movement's materials.
Transcript below. Can also be viewed via PDF HERE.
Welcome to: "3 Questions - What do you propose?" This thought exercise is intended both for the average person concerned about global problems, and for those who are still confused about, or perhaps even opposed to, The Zeitgeist Movement.
Imagine your child requires a life-saving operation. You enter the hospital and are confronted with a stark choice.
Do you take the traditional path with human medical staff, including doctors and nurses, where long-term trials have shown a 90% chance that they will save your child’s life?
Or do you choose the robotic track, in the factory-like wing of the hospital, tended to by technical specialists and an array of robots, but where similar long-term trials have shown that your child has a 95% chance of survival?
Most rational people would opt for the course of action that is more likely to save their child. But are we really ready to let machines take over from a human in delivering patient care?
Of course, machines will not always get it right. But like autopilots in aircraft (http://www.theguardian.com/world/shortcuts/2013/sep/27/safer-pilot-asleep-awake-autopilot), and the driverless cars that are just around the corner (http://www.theinquirer.net/inquirer/feature/2426988/humans-vs-robots-driverless-cars-are-safer-than-human-driven-vehicles), medical robots do not need to be perfect; they just have to be better than humans.
So how long before robots are shown to perform better than humans at surgery and other patient care? It may be sooner, or it may be later, but it will happen one day.
But what does this mean for our hospitals? Are the new hospitals being built now ready for a robotic future (http://www.wired.com/2015/02/incredible-hospital-robot-saving-lives-also-hate/)? Are we planning for large-scale role changes for the humans in our future robotic factory-like hospitals?
Our future hospitals
Hospitals globally have been slow to adopt robotics and artificial intelligence into patient care (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0016395), although both have been widely used and tested in other industries.
Medicine has traditionally been slow to change (http://consumer.healthday.com/general-health-information-16/doctor-news-206/u-s-doctors-slow-to-adopt-electronic-health-records-673677.html), as safety is at its core. Financial pressures will inevitably force industry and governments to recognize that when robots can do something better and for the same price as humans, the robot way will be the only way.
What some hospitals have done in the past 10 years is recognize the potential to be more factory-like, and hence more efficient. The term "focused factories" (https://operationsroom.wordpress.com/2010/11/18/hospitals-as-focused-factories/) has been used to describe some of these new hospitals that specialize in a few key procedures and organize the workflow in a more streamlined and industrial way.
They have even tried "lean processing" methods (http://healthydebate.ca/2014/09/topic/lean) borrowed from the car manufacturing industry. One idea is to free up the humans in hospitals so that they can carry out more complex cases.
Some people are nervous about turning hospitals into factories (http://www.huffingtonpost.com/barron-h-lerner/hospital-care_b_1511120.html). There are fears that "lean" means cutting money and hence employment. But if the motivation for going lean is to do more with the same, then it is likely that employment will change rather than reduce.
Medicine has long been segmented into many specialized fields (http://www.medicalboard.gov.au/Registration/Types/Specialist-Registration/Medical-Specialties-and-Specialty-Fields.aspx), but the doctor has been expected to travel with the patient through the full treatment pathway.
A surgeon, for example, is expected to be compassionate and good at many tasks, such as diagnosing, interpreting tests like X-rays and MRIs, performing a procedure, and providing post-operative care.
As in numerous other industries, new technology will be one of the drivers that will change this traditional method of delivery. We can see that one day, each of the stages of care through the hospital could be largely achieved by a computer, machine or robot.
Some senior doctors are already seeing a change (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2676526/), and they are worried about the de-humanising of medicine, but this is a change for the better.
Safety first but some AI already here
Our future robot-factory hospital example is the end game, but many of its components already exist. We are simply waiting for them to be tested enough to satisfy us all that they can be used safely.
There are programs to make diagnoses based on a series of questions (https://theconversation.com/digital-diagnosis-intelligent-machines-do-a-better-job-than-humans-53116), and algorithms inform many treatments used now by doctors.
Surgeons are already using robots in the operating theatre to assist with surgery (https://theconversation.com/marking-ten-years-of-surgical-robots-in-a-theatre-near-you-20285). Currently, the surgeon remains in control, with the machine being more of a slave than a master. As the machines improve, it will become possible for a trained technician to oversee the surgery, and ultimately for the robot to be fully in charge.
Hospitals will be very different places in 20 years. Beds will be able to move autonomously (http://newsroom.uts.edu.au/news/2014/11/bed-robots-future-patient-transportation), transporting patients from the emergency room to the operating theatre, via X-ray if needed.
Triage will be done with the assistance of an AI device (http://www.wired.com/2014/06/ai-healthcare/). Many decisions on treatment will be made with the assistance of, or by, intelligent machines.
Your medical information, including medications, will be read from a chip under your skin (http://www.smh.com.au/digital-life/digital-life-news/human-microchipping-ive-got-you-under-my-skin-20140416-zqvho.html) or in your phone. No more waiting for medical records or chasing information when an unconscious patient presents to the emergency room.
Robots will be able to dispense medication safely (http://www.businesswire.com/news/home/20150723005636/en/Panasonic-Autonomous-Delivery-Robots---HOSPI--), and rehabilitation will be robotically assisted (http://infinigeek.com/5-amazing-ways-that-robots-are-being-used-in-medicine/). Only our imaginations can limit how health care will be delivered.
Who is responsible when things go wrong?
The hospital of the future may not require many doctors (http://www.impactlab.net/2012/09/11/technology-will-replace-80-of-doctors-vinod-khosla/), but the numbers employed are unlikely to change at first.
Doctors in the near future are going to need very different skills from the doctors of today. An understanding of technology will be imperative. They will need to learn programming and computer skills (http://ww2.kqed.org/futureofyou/2015/12/14/5-good-reasons-why-doctors-are-learning-to-code/) well before the start of medical school. Programming will become the fourth literacy, along with reading, writing, which may vanish (http://www.theguardian.com/world/2015/jul/31/finnish-schools-phase-out-handwriting-classes-keyboard-skills-finland), and arithmetic.
But who will people sue if something goes wrong (https://enlightenme.com/top-10-medical-malpractice/)? This is, sadly, one of the first questions many people ask.
Robots will be performing tasks and many of the diagnoses will be made by a machine, but at least in the near future there will be a human involved in the decision-making process.
Insurance costs and litigation will hopefully reduce as machines perform procedures more precisely and with fewer complications. But who do you sue if your medical treatment goes tragically wrong and no human has touched you? That’s a question that still needs to be answered.
So too is the question of whether people will really trust a machine to make a diagnosis, give out tablets or perform an operation.
Perhaps we have to accept that humans are far from perfect and mistakes are inevitable in health care, just as they are when we put humans behind the wheel of a car. So if driverless cars are going to reduce traffic accidents and congestion, then maybe doctorless hospitals will one day save more lives and reduce the cost of health care?
Anjali Jaiprakash, Post-Doctoral Research Fellow, Medical Robotics, Queensland University of Technology; Jonathan Roberts, Professor in Robotics, Queensland University of Technology; and Ross Crawford, Professor of Orthopaedic Research, Queensland University of Technology.
This article was originally published on The Conversation. Read the original article: https://theconversation.com/robots-in-health-care-could-lead-to-a-doctorless-hospital-54316
Image Credit: Shutterstock.com
“We will find new things everywhere we look.” –Hunter S. Thompson
At the rate of 21st century technological innovation, each year brings new breakthroughs across industries. Advances in quantum computers, human genome sequencing for under $1,000 (http://singularityhub.com/2014/02/02/illumina-claims-new-sequencer-transcribes-18000-genomes-per-year-at-1000-each/), lab-grown meat (http://singularityhub.com/2013/08/05/panel-tastes-synthetic-lab-grown-burger-backed-by-sergey-brin/), harnessing our body's microbes as drugs (http://singularityhub.com/2015/11/22/bugs-as-drugs-seeking-microbial-cures-inside-body/), and bionic eye implants that give vision to the blind (http://singularityhub.com/2016/01/13/blind-woman-receives-bionic-eye-reads-a-clock-with-elation/): the list is long.
As new technologies push the boundaries of their respective industries, fields are now maturing, growing, and colliding with one another. This cross-pollination of ideas across industries and countries has changed the world—and will continue to—and it’s one of the reasons Singularity University exists.
The first SU Salon, a gathering for professionals of varying backgrounds but common interest in innovation, recently took place at our campus in Silicon Valley.
The event featured three speakers from distinct sectors—biotech, cybersecurity, and music—and was an open forum to connect with local technologists, innovators, and most importantly, to cross-pollinate ideas.
If you weren’t able to make it, below is a glimpse into each speaker's presentation.
Ryan Bethencourt: The future of biotech
Program Director and Venture Partner at IndieBio (http://sf.indiebio.co/)
"Our world is built on biology and once we begin to understand it, it then becomes a technology." –Ryan Bethencourt
When most people hear the word biotech they think of syringes, new cancer treatments, and cutting-edge disease therapies. Though this is biotech, it’s just one vertical.
Ryan Bethencourt, a biohacker, entrepreneur, and program director and venture partner of biology accelerator IndieBio, spoke about four primary areas of acceleration in biotech—food, biomaterials, computation, and medicine.
Bethencourt broke down how biology is being applied as a technology in each of these areas and highlighted companies to keep an eye on:
- Food: Impossible Foods (http://impossiblefoods.com/), making real burgers that bleed from plant cells. The company recently turned down an acquisition offer from Google for $200 million, so stay tuned.
- Biomaterials: Bolt Threads (https://boltthreads.com/), brewing spider silk in yeast and turning it into an outstandingly durable material with applications in the industrial space. The company recently raised roughly $40 million in funding.
- Computation: Koniku (http://www.koniku.uk/), pioneering neuron-powered computation (http://www.fastcoexist.com/3055566/combining-human-neurons-with-machines-makes-a-truly-powerful-computer) by harnessing the power of biological neurons to create the next generation of supercomputers.
- Medicine: Organogenesis Inc., developing regenerative medicine such as bioactive wound healing and soft tissue regeneration. Next up in this industry may be the ability to build human organs like lungs and hearts (http://singularityhub.com/2015/12/31/printable-organs-are-closer-than-ever-thanks-to-three-bioprinting-breakthroughs/).
Siobhan MacDermott: The state of cybersecurity
Principal of Risk and Cybersecurity at Ernst & Young (http://www.ey.com/US/en/Industries)
“[Though] many people in DC know little about Internet security and privacy…[they] are the ones trying to reform it.” -Siobhan MacDermott
When Siobhan MacDermott began working in the field of cybersecurity in the 1990s, companies across the board could not grasp why they needed Internet security software. It seemed foolish and unreasonable. Jump forward to 2016, and the need is clear. It's projected there will be one million unfilled cybersecurity-related jobs by 2020 if we continue at the current rate of education in this field.
MacDermott is one of the foremost experts on the future of cybersecurity and privacy. As principal of risk and cybersecurity at Ernst & Young, she coaches Fortune 100 companies, NGOs, and governments on best practices and strategies for Internet security. She is also the vice chair at the Fund for Peace (http://global.fundforpeace.org/aboutus).
In her talk, MacDermott explored pressing cybersecurity issues such as how to balance surveillance and privacy—a subject gaining global attention, and also one that has been front-and-center in recent US presidential debates.
MacDermott highlighted how, at the diplomatic level, the exchange and security of information is under mass scrutiny, and pointed to additional players, such as "hacktivist" groups like Anonymous (https://en.wikipedia.org/wiki/Anonymous_(group)) and campaign-aligned corporations (http://www.thedailybeast.com/articles/2015/08/04/hillary-clinton-s-mega-donors-are-also-funding-jeb-bush.html).
Tamer Rashad: Democratizing music
Founder and CEO of Humtap (http://www.humtap.com/)
Music, according to Tamer Rashad, allows communities and cultures to transcend traditional boundaries of communication. But it's expensive to produce high-quality music, and the industry is dominated by three major music labels.
Rashad said Humtap wants to democratize music creation with new technologies such as AI and machine learning to open music production to the masses.
Unexpected convergent consequences: This is what happens when eight different exponential technologies all explode onto the scene at once.
An expert might be reasonably good at predicting the growth of a single exponential technology (e.g., the Internet of Things), but try predicting the future when the following eight technologies are all doubling, morphing and recombining at once, and you have a very exciting (read: unpredictable) future.
2. Internet of Things (Sensors & Networks)
4. Artificial Intelligence
5. 3D Printing
6. Materials Science
7. Virtual/Augmented Reality
8. Synthetic Biology
This year at my Abundance 360 Summit I decided to explore this concept in sessions I called Convergence Catalyzers.
For each technology, I brought in an industry expert to identify their top five recent breakthroughs (2012-2015) and their top five anticipated breakthroughs (2016-2018). Then, we explored the patterns that emerged.
This post (the first of seven) is a look at networks and sensors (i.e., the Internet of Everything). Future posts will look at the remaining tech areas.
Networks and Sensors – Context
At A360 my first guest was Raj Talluri, the Senior VP of Product Management at Qualcomm, who oversees their Internet of Things (IoT) and mobile computing businesses. Here's some context before we dive in.
The Earth is being covered by an ever-expanding mesh of networks and sensors that form the Internet of Things (or the Internet of Everything). Think of the IoT as the network of all digitally accessible objects, estimated at 15 billion in number today, and expected to grow to more than 50 billion by 2020.
But what makes this even more powerful is that each of these connected devices is itself made up of a dozen sensors measuring everything from vibration, position and light to blood chemistries and heart rate.
Imagine a world rapidly approaching a trillion-sensor economy, where the IoT enables a data-driven future in which you can know anything you want, anytime you want, anywhere you want. A world of instant, high-bandwidth communications and near-perfect information.
The implications of this are staggering, and I asked Raj to share his top five breakthroughs from the past three years to illustrate some of them.
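A quick back-of-the-envelope check of those device counts. The 15 billion and 50 billion figures come from the text; the assumption of a five-year horizon to 2020 and the extrapolation toward a trillion devices are purely illustrative:

```python
import math

# Figures from the text: ~15 billion connected devices today,
# ~50 billion expected by 2020 (assumed here to be ~5 years out).
start, end, years = 15e9, 50e9, 5

# Implied compound annual growth rate.
cagr = (end / start) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # → 27.2%

# Illustrative only: how long until a trillion devices at that same rate?
years_to_trillion = math.log(1e12 / start) / math.log(1 + cagr)
print(f"Years from 15B to 1T at that rate: {years_to_trillion:.0f}")  # → 17
```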
Recent Top 5 Breakthroughs (2012 – 2015)
Here are the breakthroughs Raj identified in networks and sensor technology from 2012-2015.
1. Emergence of Continuous Low-Power Always-On Sensors
One of the major advances from the past three years has been the proliferation of "always on" sensors.
As Raj explains, "You'll be amazed how many of your phone sensors are always on. If you look at your phone, there were times when you had to press the button to say hello Google or hi Siri. Now, you don't. You just talk to it and it figures it out."
"This has been made possible because you're now able to make very low power sensors that listen to you all the time, keyword detect and do the data processing."
2. Smartphones Drive Sensor Volume at Low Cost
The number of sensors in your smartphone today has exploded. Raj continues, "We are now seeing 10, 20 and even 30 sensors embedded in our smartphones. Things like proximity sensors when you pick your phone up, gyros, cameras, depth sensors and so on. This has really driven down cost and driven the discovery of new sensors, because there are a billion smartphones [sold] every year. It's a huge opportunity."
A billion phones means 20+ billion sensors — and we are headed towards a trillion sensor economy.
3. "Systems" Fuse Continuous Sensor Data and Cloud Processing
Seamless integration of processing is happening in the cloud and on your device. Raj explains, "When you say, 'Okay, Google,' a part of what happens next is on the phone and a part is on the cloud. You don't really know where the processing is being done, on your device or on the cloud, the handoff is seamless."
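The device/cloud split Raj describes can be sketched as a two-stage pipeline: a cheap, always-on keyword detector runs locally, and only when it fires is the request handed off to heavyweight "cloud" processing. The wake phrases and function names below are invented for illustration and are not any vendor's actual stack:

```python
from typing import Optional

# Hypothetical wake phrases; real systems use trained acoustic models,
# not substring matching.
KEYWORDS = {"ok google", "hi siri"}

def on_device_keyword_detect(audio_text: str) -> bool:
    """Low-power local stage: just spot a wake phrase."""
    return any(k in audio_text.lower() for k in KEYWORDS)

def cloud_process(audio_text: str) -> str:
    """Stand-in for expensive remote speech understanding."""
    return f"understood request: {audio_text}"

def pipeline(audio_text: str) -> Optional[str]:
    if not on_device_keyword_detect(audio_text):
        return None  # device stays quiet; nothing is sent to the cloud
    return cloud_process(audio_text)

print(pipeline("hum of the room"))         # → None
print(pipeline("OK Google, set a timer"))  # → understood request: OK Google, set a timer
```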
4. 4K Video Format Goes Mainstream
4K screen resolution is close to the point at which the brain is unable to notice individual pixels. As such, somewhere between 4K and 8K, virtual reality becomes visually indistinguishable from physical reality.
Raj explains how this technology is exploding: "If you buy a 4K TV and watch 4K content, it's very hard to go back to 1080p. It almost feels like you were watching a VHS tape when DVDs came out. Today, if you look at what we've done at Qualcomm in the high-end processors space, we shipped over 200 to 250 million processors that actually record in 4K."
5. Opening of Sensor APIs to 3rd Party Apps Development Community
The reality is that the majority of phone apps now come from third party developers. This explosion in apps (perhaps 50 to 100 per phone) is only possible because of (i) the opening of the APIs for the sensors in the devices and (ii) the community of developers that has emerged as a result.
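One way to picture what "opening the sensor APIs" enables: the platform exposes a subscription interface that third-party code can hook into. Everything below (class, method and sensor names) is a hypothetical sketch, not any real mobile OS API:

```python
from collections import defaultdict
from typing import Callable

class SensorHub:
    """Hypothetical platform-side hub fanning sensor readings out to apps."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, sensor: str, callback: Callable[[float], None]):
        """A third-party app registers interest in one sensor."""
        self._subscribers[sensor].append(callback)

    def publish(self, sensor: str, reading: float):
        """The platform delivers a new reading to every subscriber."""
        for callback in self._subscribers[sensor]:
            callback(reading)

# A "third-party app" consuming proximity readings:
hub = SensorHub()
events = []
hub.subscribe("proximity", lambda r: events.append(r))
hub.publish("proximity", 0.03)
print(events)  # → [0.03]
```

The point of the pattern is that the platform never needs to know what the app does with the data; opening the API is what lets an ecosystem of developers grow around the hardware.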
So what's in store for the near future?
Anticipated Top 5 Breakthroughs (2016 – 2018)
Here are Raj's predictions for the most exciting, disruptive developments coming in networks and sensors in the next three years.
As entrepreneurs and investors, these are the areas you should be focusing on, as the business opportunities are tremendous.
1. Wireless Network Densification (4G/5G): Cost / Megabit Plummets
The cost per megabit of connection is going to plummet – essentially nearing "free" in the very near future.
Raj expands, "Already in places like Indonesia, we find that people are actually getting data plans at a price of $5 a month. In most of the world, the cost per megabit is extremely low as the cost of launching networks is plummeting."
2. Emergent Peer-To-Peer Tech Drives Automotive Communication and Safety
Soon all of your devices at home and work (screens, thermostats, DVRs, computers, even cars) will automatically connect seamlessly. You won't have to make conscious decisions about how to connect your washing machine; when it finishes washing the clothes, you will get a notification on your phone.
3. Global Internet Connectivity via Satellite Plummets in Cost
Qualcomm, in partnership with Richard Branson, is working to deploy a 648-satellite constellation called OneWeb. Raj explains, "Global Internet connectivity through satellites is finally going to happen… Just think about three billion new people coming online at a megabit per second. It is going to be a completely different kind of experience."
4. Exponential Growth in Connections to Internet from Various Devices — Personal/Home/Cities
Raj says, "I often ask people: how many IP addresses do you think you have at your house?" Most people have no clue. They say, "Maybe two or three..."
For Raj (and most of us) it's more like 50… your TVs, your set top boxes, phone, iPads, Nest, cameras, light bulbs…
"In the next few years, the number of things that will be connected to the Internet at any given point of time in your life is going to be so huge that the way they work is going to be very different. You won't need to reach for your phone to do something. Coupled with sensor networks, you'll just be able to speak and ask for what you want."
5. Major Improvements of Head-Mounted User Interfaces With Rich Bandwidth and Onboard Sensors
Over the next three years, we'll see rapid uptake of VR and AR headsets, each with 4K displays and cameras, and packed with a suite of sensors connected by high-bandwidth communications to the cloud. The result is that each of us will be wearing an incredible user interface with high-speed communications that will make our virtual experiences so good that we won't need to travel to experience something.
Image Credit: Shutterstock.com
Have you ever walked into a room and forgotten why you were there? Or, in the middle of a conversation, forgotten a person's name? Or stumbled while briefing your boss on a project because a crucial factoid escaped your mind?
Yeah, me too.
“Tip of the tongue” syndrome haunts us all — that feeling where you’re close to remembering something, but just can’t seem to get there. But what if, at that exact moment, an AI-powered “cognitive assistant” pitches in and delivers that missing piece of information straight into your ear?
That future may soon be here. In a patent published late last year (https://www.google.com/patents/US9177257), IBM described a sort of "automatic Google for the mind": one that monitors your conversations and actions, understands your intentions and offers help only when you need it most.
The brainchild of computational neuroscientist Dr. James Kozloski (http://researcher.watson.ibm.com/researcher/view_person_subpage.php?id=2817), a master inventor at IBM Research, the cognitive digital assistant has lofty goals: by acting as an external memory search module, it hopes to help people with memory impairments regain the cognitive ability to navigate through life with minimal help.
For the rest of us? A searchable memory could give us the opportunity to make innovative connections, support brainstorming sessions and help us tackle more problems and think more deeply.
In a recent interview with The Atlantic (http://www.theatlantic.com/technology/archive/2016/01/sorry-dave-afraid-i-cant-do-that/431559/), Kozloski laid out his plans for a human-AI mind-meld future.
Context Is Key
To understand how an AI cognitive assistant works, we first need to look at why human memory fails.
One reason is context. We excel at memorizing stories — the whats, whos, whens and wheres. When we remember an event, we fit its different components together like a puzzle; because of its linked nature, any component can act as a trigger, fishing out the entire memory from the depths of our minds.
Yet often we have trouble finding the trigger: the memory is there, but we can’t access it. Some current apps — to-do lists, scheduling apps, contact lists — already help us remember by acting as a trigger. But they can’t help someone who needs a reminder to update and use those apps in the first place.
IBM’s cognitive assistant hopes to bridge this gap.
Acting as a model of the user’s memory and behavior, it surveys our conversations, monitors our actions and — using Bayesian inference, a probabilistic algorithm often used in machine learning — predicts what we want, detects when we need help and offers support.
If you’re thinking “whoa, that’s creepy,” you’re not alone.
But according to Kozloski, we are already constantly monitored by our electronic devices. A Fitbit tracks your heart rate and movement, a sweat analyzer (http://www.nytimes.com/2016/02/02/science/new-wearable-sensor-can-collect-data-from-sweat.html?partner=rss&emc=rss&_r=0) checks for dehydration and fatigue, and augmented reality devices (http://motherboard.vice.com/blog/real-time-translation-devices-are-breaking-down-the-worlds-language-barriers) listen in on your conversations to offer real-time translations and suggest potential replies.
And the future of trackers (http://www.bbc.co.uk/news/science-environment-35411685) is only getting more sophisticated and personal.
These data, combined with data from your environment, are then fed into the cognitive assistant. With enough data, the AI can compute a model of what a person is thinking or doing.
By analyzing word sequences and speech patterns, for example, it may detect whether you’re talking in a business setting or with a family member. It could similarly also monitor the words of your conversation partner and, using Bayesian inference, make an educated guess about who he or she is.
If you suddenly experience a word block, the AI would make a note of where the conversation lapsed. Then, using data from your previous speech recordings and the Internet, it could offer up words that you most likely had in mind for that particular context.
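The patent doesn’t publish code, but the Bayesian step can be sketched in miniature. Below is a toy word-suggestion scorer; the data structures (a frequency history of the user’s words, co-occurrence counts with context words) and the function itself are illustrative assumptions, not from the patent:

```python
from collections import Counter

def suggest_word(context_words, history, cooccur):
    """Rank candidate words by a naive Bayesian score:
    P(word | context) is proportional to P(word) * product of
    P(context_word | word) over the context.
    `history` is a list of words the user has spoken before;
    `cooccur` maps (word, context_word) -> co-occurrence count.
    """
    prior = Counter(history)
    total = sum(prior.values())
    scores = {}
    for word, count in prior.items():
        score = count / total  # prior: how often the user says this word
        for ctx in context_words:
            # likelihood with add-one smoothing so unseen pairs aren't zeroed out
            score *= (cooccur.get((word, ctx), 0) + 1) / (count + 1)
        scores[word] = score
    # highest-posterior candidates first
    return sorted(scores, key=scores.get, reverse=True)
```

A real system would use far richer features (speech patterns, calendars, location), but the shape of the inference — prior belief updated by contextual evidence — is the same.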
The system would work even better if your partner also wears a cognitive assistant device, Kozloski suggests. In that case, the two devices could share data to build a better model of what information you’re trying to access at that very moment.
If all this sounds abstract, here’s an example.
Imagine you’re calling a friend you haven’t talked to recently. From the dial tone or your wrist movement, the cognitive assistant tracks the number that you dialed. From there, it figures out who you’re calling, and crosschecks its database for previous conversations, calendars and photos related to that person.
It then gently reminds you — through an earpiece, speakers or email — that last time you talked, your friend had just begun a new job. By scanning your texts, it notes that several weeks ago she had booked a tattoo appointment — her first! — that was now coming up.
All of this information sits primed and ready — all before your friend picks up — just in case you want a friendly reminder.
How — and if — you want the data delivered is up to you, stresses Kozloski. That’s the thing: the cognitive assistant would only pitch in when you want it to.
“It would be very annoying if it were continually interrupting you,” he said.
The assistant could come with a preset threshold for jumping in. For example, it could detect pauses in your speech or actions, and through machine learning, understand the “tells” of when you’re confused. This data helps the assistant automatically adjust its threshold.
Direct human feedback would also contribute to the assistant’s accuracy, allowing a truly personalized experience.
By catering to the individual’s cadence and idiosyncrasies, it could build a better model of what’s normal for the user, and what’s not, Kozloski said.
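One way such an adjustable threshold could work — this exact rule is my illustration, not the patent’s — is to treat a pause as a cue for help only when it is unusually long relative to the user’s own speech rhythm:

```python
import statistics

class InterruptionGate:
    """Decide when an assistant should pitch in, by comparing the user's
    current pause against their usual rhythm. A hypothetical sketch only.
    """
    def __init__(self, sensitivity=2.0):
        self.pauses = []          # observed pause lengths, in seconds
        self.sensitivity = sensitivity

    def observe(self, pause):
        """Record a normal pause to learn this user's baseline."""
        self.pauses.append(pause)

    def should_assist(self, current_pause):
        if len(self.pauses) < 2:
            return False          # not enough data to know what's normal
        mean = statistics.mean(self.pauses)
        stdev = statistics.stdev(self.pauses)
        # assist only when the pause is unusually long for this user
        return current_pause > mean + self.sensitivity * stdev
```

Direct feedback ("don’t interrupt me here") could then nudge `sensitivity` up or down per user.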
An obvious application for the assistant would be for people suffering from memory loss.
“In early stages of Alzheimer's, a person can often perform everyday functions involving memory,” wrote Kozloski in the patent.
As memory loss becomes more severe, the person would begin to experience the devastating results of cognitive breakdown, he explains. They won’t be able to take their medication on time. They might miss important appointments. They may even lose the ability to interact with other people, to dress themselves or cook meals.
In these cases, a cognitive assistant would not only help the user by giving them friendly reminders, it could also monitor the person’s cognitive decline over time.
For example, are they forgetting something more frequently? Is it a memory or a motor task? Is the user straying from his or her usual routine?
The assistant could “perhaps prevent side effects of what are otherwise sort of innocuous episodes of forgetting,” said Kozloski.
Kozloski is careful to address privacy and security issues that could arise from uploading your digital self to the assistant.
“…The invention includes security mechanisms to preserve the privacy of the user or patient. For example, the system can be configured to only share data with certain individuals, or to only access an electronic living will of the patient in order to determine who should have access if the user is no longer capable of communicating this information,” he writes (https://www.google.com/patents/US9177257).
The system may adopt other security measures, but for now Kozloski is focusing on the device itself.
Even if Kozloski’s idea fails, it’s easy to imagine that something similar may take its place. IBM’s cognitive assistant, combined with augmented reality, virtual reality and brain-machine interfaces, suggests that we are on the fast track towards a new way of life. It’s a human+machines future (http://singularityhub.com/2015/10/14/forget-humans-vs-machines-its-a-humans-machines-future/).
Image Credit: Shutterstock.com
"It is human nature and inevitable in my view that we will edit our genomes for enhancements.”
—J. Craig Venter (http://time.com/4204210/craig-venter-gene-editing/)
This week, Kathy Niakan, a biologist working at the Francis Crick Institute in London, received the green light from the UK’s Human Fertilisation and Embryology Authority to use the genome editing technique CRISPR/Cas9 (http://singularityhub.com/2016/01/21/crisprcas9-genome-editing-is-a-huge-deal-but-its-just-the-tip-of-the-iceberg/) on human embryos.
Niakan hopes to answer important questions about how healthy human embryos develop from a single cell to around 250 cells, in the first seven days after fertilization.
By removing certain genes during this early development phase using CRISPR/Cas9, Niakan and her team hope to understand what causes miscarriages and infertility, and in the future, possibly improve the effectiveness of in-vitro fertilization and provide better treatments for infertility.
The embryos used in the research will come from patients who have a surplus of embryos in their IVF treatment and give consent for these embryos to be used in research. The embryos would not be allowed to survive beyond 14 days and are not allowed to be implanted in a womb to develop further. The team still needs to have their plans reviewed by an ethics board, but if approved, the research could start in the next few months.
In an op-ed for Time magazine (http://time.com/4204210/craig-venter-gene-editing/), J. Craig Venter writes that the experiments proposed at the Crick Institute are similar to previous gene knockouts in mice and other species. While some results may be of interest, Venter believes, most will be inconclusive, as the field has seen in the past.
He continues, “The only reason the announcement is headline-provoking is that it seems to be one more step toward editing our genomes to change life outcomes.”
Venter’s stance on the matter of genome editing echoes that of many other scientists in the field: Proceed with caution.
In December 2015, The National Academies of Sciences, Engineering and Medicine held an International Summit on Human Genome Editing, and after several days of discussion, released a statement of conclusions (http://www8.nationalacademies.org/onpinews/newsitem.aspx?RecordID=12032015a).
In a nutshell, the group recommended that basic and preclinical research should continue with the appropriate legal and ethical oversight. If human embryos or germline cells are modified during research, they should not be used to establish a pregnancy.
In cases of clinical use, the group underscored a difference between editing somatic cells (cells whose genomes are not passed on to the next generation) versus germline cells (whose genomes are passed on to the next generation).
Somatic cell editing would include editing genes that cause diseases such as sickle-cell anemia. Because these therapies would only affect the individual, the group recommends these cases should be evaluated based on “existing and evolving” gene-therapy regulations.
It’s worth noting that governments across the world have significantly diverse ways of handling gene-therapy regulations.
In the US, the National Institutes of Health (NIH) won’t fund genomic editing research involving human embryos. Research like Kathy Niakan’s is not illegal, as long as it is privately funded. In China, the government doesn’t ban any particular type of research, while countries like Italy and Germany are on the other side of the spectrum, where all human embryo research is banned.
The International Summit on Genome Editing concluded that today it would be “irresponsible to proceed with any clinical use of germline editing” until we have more knowledge of the possible risks and outcomes of doing so.
In spite of that, the group also concluded that as “scientific knowledge advances and societal views evolve, the clinical use of germline editing should be revisited on a regular basis.” Similarly, Venter writes of the need for the scientific community to gain better understanding of the “software of life before we begin re-writing this code.”
While the “proceed with caution” message from scientists is loud and clear, the age of programmable biology seems to be getting closer and closer.
Between Venter’s statement that it is inevitable that we will edit our genomes for enhancements and the suggestion that human germline editing should be ‘revisited’ as opposed to banned, it seems even the scientific community is assuming a future which includes human genome editing.
So, where do we go from here?
This brave new future seems equal parts exciting, frightening — and inevitable. At this stage, more research is critical — so when the time comes to rewrite the software of life, we do so with wisdom.
Image Credit: http://www.shutterstock.com ">Shutterstock.com
Last week, news broke that the holy grail of game-playing AI—the ancient and complex Chinese game Go—was cracked by AI system AlphaGo.
AlphaGo was created by Google’s DeepMind, a UK group led by David Silver and Demis Hassabis. Last October the group invited three-time European Go champion Fan Hui to their office in London. Behind closed doors, AlphaGo defeated Hui 5 games to 0—the first time a computer program has beaten a professional Go player.
Google announced the achievement in a blog post (http://googleresearch.blogspot.com/2016/01/alphago-mastering-ancient-game-of-go.html), calling it one of the “grand challenges of AI” and noting it happened a decade earlier than experts predicted.
A brief history of AI vs. human game duels
AI battling humans in games has been a long-standing method for testing the intelligence of a computer system.
In 1952, the first computer mastered the classic game tic-tac-toe, or noughts and crosses (http://www.pong-story.com/1952.htm), followed by checkers in 1994 (https://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1208/1109). In 1997, IBM’s supercomputer Deep Blue (https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)) cracked the game of chess when it beat world chess champion Garry Kasparov. A decade and a half later, in 2011, IBM’s supercomputer Watson used advanced natural language processing to destroy all human opponents at Jeopardy (https://www.youtube.com/watch?v=WFR3lOm_xhE).
In 2014, a DeepMind algorithm taught itself to play dozens of Atari games (http://googleresearch.blogspot.com/2015/02/from-pixels-to-actions-human-level.html). The system combined deep neural networks and reinforcement learning to turn raw pixel inputs into real-time actions—and pretty solid gaming skills.
But humans still reigned at Go (http://www.wired.com/2014/05/the-world-of-computer-go/) and were expected to for a while yet.
Why we thought Go required human intellect
With more potential board configurations than the number of atoms in the universe, Go is in a league of its own in terms of game complexity. Because of its vast range of possibilities, it’s a game that requires human players to use logic, yes, but also intuition.
The rules of Go (https://en.wikipedia.org/wiki/Rules_of_go) are relatively simple: two players go back and forth playing black or white stones on a 19-by-19 grid. Stones are captured when the opponent surrounds them completely, cutting off every adjacent empty point. A player wins when their color controls more than 50 percent of the board.
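The capture rule above can be made concrete in a few lines of code. This is a minimal sketch of just the capture check, not a full Go engine: a group of stones is captured when a flood fill of the group finds no adjacent empty point (no "liberty").

```python
def captured(board, x, y):
    """Return True if the stone at (x, y) belongs to a group with no
    liberties (no adjacent empty points). `board` maps (x, y) -> 'B'
    or 'W'; missing keys are empty points.
    """
    color = board[(x, y)]
    seen, stack = set(), [(x, y)]
    while stack:
        cx, cy = stack.pop()
        if (cx, cy) in seen:
            continue
        seen.add((cx, cy))
        for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
            if not (0 <= nx < 19 and 0 <= ny < 19):
                continue              # off the 19-by-19 board
            neighbor = board.get((nx, ny))
            if neighbor is None:
                return False          # found a liberty: the group lives
            if neighbor == color:
                stack.append((nx, ny))  # same-colored stone: part of the group
    return True                       # no liberties anywhere in the group
```

Simple rules, enormous search space: it is exactly this gap that makes Go easy to program and hard to master.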
The twist is, there are too many possible moves for a player to comprehend, which is why many experts often make their moves based on intuition.
Though intuition was thought to be a uniquely human element needed to master Go, DeepMind’s AlphaGo shows this isn’t necessarily the case.
How AlphaGo’s deep learning works
The sheer size of the search tree in Go—meaning all possible moves available in a game—makes it far too large for even computational brute force. So, DeepMind designed AlphaGo’s search algorithm to be more human-like than its precursors.
DeepMind’s David Silver says “[the algorithm is] more akin to imagination.”
Prior Go algorithms used a powerful search technique called Monte Carlo tree search, or MCTS (https://en.wikipedia.org/wiki/Monte_Carlo_tree_search), where a random sample of a search tree is analyzed to determine the next best moves. AlphaGo combines MCTS with two deep neural networks (https://en.wikipedia.org/wiki/Deep_learning#Deep_neural_networks)—a machine learning method that has recently taken AI by storm—each made up of millions of neuron-mimicking connections to help analyze possible moves.
AlphaGo simulates the remainder of the game, but uses the data from the two neural networks to narrow and guide its search. After simulation, AlphaGo selects its move.
The head of DeepMind, Demis Hassabis, told Wired (http://www.wired.com/2016/01/in-a-huge-breakthrough-googles-ai-beats-a-top-player-at-the-game-of-go/), “The most significant aspect of all this…is that AlphaGo isn’t just an expert system, built with handcrafted rules. Instead, it uses general machine-learning techniques to win at Go.”
Using two deep learning networks to train each other
The group at Google started by training AlphaGo on 30 million human moves, until the neural network could predict the next human move with 57 percent accuracy.
But for AlphaGo to go beyond simply mimicking human moves, the two neural networks played thousands of games against each other, learning new strategies and how to identify patterns on their own—a trial-and-error process called reinforcement learning (https://en.wikipedia.org/wiki/Reinforcement_learning). (The same process used to train DeepMind’s Atari AI.)
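Stripped to its essence, that self-play step rewards moves that led to a win and penalizes those that led to a loss. The cartoon below uses a table of move weights only to show the shape of the idea; AlphaGo actually adjusts millions of network weights by policy gradient, and `play_game` is a hypothetical stand-in:

```python
def self_play_update(policy, play_game, learning_rate=0.01):
    """One simplified self-play reinforcement-learning step: the current
    policy plays itself, then moves from the winning side are reinforced
    and moves from the losing side are discouraged.
    `policy` maps move -> weight; `play_game(policy)` returns
    (winner_moves, loser_moves) for one self-play game.
    """
    winner_moves, loser_moves = play_game(policy)
    for move in winner_moves:
        policy[move] = policy.get(move, 0.0) + learning_rate  # reward
    for move in loser_moves:
        policy[move] = policy.get(move, 0.0) - learning_rate  # penalize
    return policy
```

Repeated over thousands of games, this trial-and-error loop is how the networks discover strategies no human game ever showed them.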
Notably, it isn’t just the software. AlphaGo’s computational power is vast, amply drawing on Google’s cloud computing might. And even so, its ability is nowhere near that of the human mind. Hassabis emphasized that Go is a closed game with a defined goal, and therefore does not represent even a microcosm of the real world.
So, no, we do not need to have an existential crisis over AlphaGo.
A decade earlier than predictions
In May of 2014, Wired published a feature titled “The Mystery of Go, the Ancient Game That Computers Still Can’t Win” (http://www.wired.com/2014/05/the-world-of-computer-go/), in which computer scientist Rémi Coulom estimated we were a decade away from having a computer beat a professional Go player. (To his credit, he also said he didn’t like making predictions.)
Crazy Stone (https://en.wikipedia.org/wiki/Crazy_Stone_(software)), Coulom’s computer program referenced in the article, used the Monte Carlo tree search technique, but unlike AlphaGo, it lacked the reinforcement learning of two separate neural networks training each other. That is, Crazy Stone was not able to teach itself to identify new patterns as it progressed in the game.
Added to quick and dramatic breakthroughs in other classically thorny AI problems like image recognition (http://singularityhub.com/2015/05/17/how-artificial-intelligence-is-primed-to-beat-you-at-wheres-waldo/), AlphaGo is further testament to deep learning’s power.
“Deep learning is killing every problem in AI,” Coulom recently told Nature (http://www.nature.com/news/google-ai-algorithm-masters-ancient-game-of-go-1.19234).
Next up for AlphaGo is a match this March in Seoul against legendary Go world champion Lee Sedol, who in the 2014 Wired article (http://www.wired.com/2014/05/the-world-of-computer-go/) was quoted saying, “There is chess in the western world, but Go is incomparably more subtle and intellectual.”
After AlphaGo’s recent defeat of Hui, the algorithm’s abilities are crystal clear. Beating Lee, however, would be equal to IBM’s Deep Blue beating Garry Kasparov in 1997—only AlphaGo would do so using the next generation in machine learning.
“We’re pretty confident,” Hassabis says (http://www.nature.com/news/google-ai-algorithm-masters-ancient-game-of-go-1.19234).
Image source: Shutterstock
At Singularity University, space is one of our Global Grand Challenges (GGCs). The GGCs are defined as billion-person problems. They include, for example, water, food, and energy and serve as targets for the innovation and technologies that can make the world a better place.
You might be thinking: We have enough challenges here on Earth—why include space?
We depend on space for telecommunications, conduct key scientific research there, and hope to someday find answers to existential questions like, “Are we alone in the universe?” More practically, raw materials are abundant beyond Earth, and human exploration and colonization of the Solar System may be a little like buying a species-wide insurance policy against disaster.
Space resources and technologies are rapidly accelerating.
We need to start thinking about safe and equitable use of those resources now for the benefit of humanity and our possible future as a multi-planetary species.
In this four part GGC series, we’ll walk you through some important space issues facing us now:
Part 1: Our Home Among the Stars
Part 2: The Race to the Moon and Mars
Part 3: Asteroid Detection and Mining
Part 4: Finding Extraterrestrial Life
The following article was curated from excerpts of previously published Singularity Hub articles. Special thanks to Jason Dorrier (http://singularityhub.com/author/jdorrier/), Alison Berman (http://singularityhub.com/author/aberman/), and Sarah Scoles (http://singularityhub.com/author/sscoles/) for their works quoted below.
Part 1: Our Home Among the Stars
Like many industries, aerospace is being disrupted by exponential technologies that are making it faster, cheaper, and easier to launch rockets and satellites, and even manufacture in space.
Here are three key space technologies to keep an eye on today.
1. 3D Printing Is Taking Off (Literally)
3D printing is helping to make better rockets faster and cheaper (http://singularityhub.com/2016/01/08/why-3d-printing-will-be-a-key-technology-in-the-next-space-race/).
"NASA recently announced that they test fired a research rocket engine. Nothing special about that—other than the fact said engine was http://www.nasa.gov/centers/marshall/news/news/releases/2015/piece-by-piece-nasa-team-moves-closer-to-building-a-3-d-printed-rocket-engine.html ">75 percent 3D printed parts.
As industrial 3D printing has moved from prototyping to actually manufacturing finished products, the aerospace industry has become an avid early adopter. Mass production techniques still make economic sense in many industries, but for the ultra-precise, almost bespoke parts in rockets, 3D printing is a great fit.
Last year, GE showed off a scaled-down 3D printed jet engine firing at 33,000 RPM (http://singularityhub.com/2015/05/15/watch-ges-3d-printed-jet-engine-fire-at-33000-rpm/). SpaceX’s recent recovery of a Falcon 9 rocket was not only spectacular, but the rocket has long used 3D printed parts too (http://www.spacex.com/news/2014/07/31/spacex-launches-3d-printed-part-space-creates-printed-engine-chamber-crewed). And NASA’s latest trial shows 3D printing is set to become an even bigger part of rocket engine manufacturing.
3D printing is well suited to aerospace applications for more than just the fact that it can be easily customized. 3D printed components typically have fewer parts in need of joining and assembling. The turbopump in NASA's recent engine test, for example, had 45 percent fewer parts than a traditional design.
3D printing also speeds up research and development. Engineers can design a part, print it, test it, find flaws, fix them, and repeat. It takes less time to get from initial design to final part than using traditional casting and the quality is often better.
All that saved time not only accelerates progress but also reduces cost."
3D printing's real value may only become clear once we leave Earth (http://singularityhub.com/2015/10/30/lowes-joins-made-in-space-to-bring-first-commercial-grade-3d-printer-to-space/).
"Getting to space is only half the battle—learning to live, build, and expand is next. It’s the second half of the equation that http://www.madeinspace.us/ ">Made In Space, an aerospace manufacturing company that’s developed the first low-gravity 3D printer to operate in space, is tackling. The company hit on the idea during Singularity University’s 2010 Graduate Studies Program and made history by sending the http://singularityhub.com/2014/09/24/3d-printer-delivered-to-space-station-launches-new-era-of-space-manufacturing/ ">first 3D printer to space [in late 2014].
The implications of the ability for humanity to manufacture in space are vast and piqued the curiosity of Lowe’s Innovation Lab, also a Singularity University partner through the SU Corporate Labs program. Lowe’s Innovation Lab announced last October (http://singularityhub.com/2015/10/30/lowes-joins-made-in-space-to-bring-first-commercial-grade-3d-printer-to-space/) that they’ll partner with Made In Space to launch the first-ever commercial-grade off-world 3D printer to the International Space Station (ISS) by early 2016.
Enabling the ISS with the capability for commercial-grade 3D manufacturing is colossal. It solves a massive obstacle the ISS faces during repairs by allowing the crew to print new parts and tools on demand (http://singularityhub.com/2015/06/25/the-story-behind-the-first-3d-printed-wrench-in-space/), instead of depending on resupply missions from the ground."
2. Reusable Rockets Are a Big Deal
The cost to launch stuff into orbit is astronomical (http://singularityhub.com/2015/12/22/bullseye-watch-spacex/). Cliché, but true.
"Rockets are seriously complicated and costly machines—yet, they’re currently used once and thrown away. SpaceX is attempting to recover, refurbish, and reuse its rockets to radically slash launch costs.
To be a multi-planetary species, SpaceX founder Elon Musk believes we have to develop fully reusable rockets. If every rocket launched 1,000 times, instead of just once, capital costs would plummet from $50 million to $50,000 per launch (not counting operational expenses) and could drive per-pound launch costs down 100-fold."
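The arithmetic behind those figures is simple amortization of the rocket's capital cost over its number of flights:

```python
def capital_cost_per_launch(rocket_cost, launches):
    """Spread a rocket's capital cost evenly across its launches.
    Operational expenses are excluded, as in the article's numbers."""
    return rocket_cost / launches

# A roughly $50M Falcon 9-class rocket: expended once vs. reused 1,000 times
expended = capital_cost_per_launch(50_000_000, 1)      # $50,000,000 per launch
reused = capital_cost_per_launch(50_000_000, 1_000)    # $50,000 per launch
```

Refurbishment and fuel costs would keep real savings below this ideal, which is why the article notes per-pound costs might fall "only" 100-fold rather than 1,000-fold.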
SpaceX recently recovered its first rocket (http://singularityhub.com/2015/12/22/bullseye-watch-spacex/)…can they do it again?
"When SpaceX first began launching, steering, and landing rockets a few years ago, the http://singularityhub.com/2013/08/19/musk-to-be-a-multi-planetary-species-we-need-to-develop-reusable-rockets/ ">dream of reusable rockets began to seem less dreamlike. But http://singularityhub.com/2015/01/18/top-5-spacex-videos-in-quest-for-reusable-rockets/ ">those first tests weren't much more than a few hundreds of meters high. The real test would be a live launch delivering a payload to orbit.
[Last December] SpaceX successfully recovered the first stage of a Falcon 9 rocket after boosting a payload of satellites into orbit. That's a big deal for the future of space exploration."
"And this is just the beginning. SpaceX still has to repeat the feat, learn to economically recondition rockets (because launch is a violent affair), make the second stage reusable (which is a smaller fraction of the overall cost and also more difficult to bring home), and finally, establish a track record of safe launches of reconditioned rockets for prospective customers.
It's an exciting time for space exploration, and though challenges remain, the future looks bright."
[Editor's note: Jeff Bezos and his company Blue Origin have now not only recovered a rocket but reconditioned, relaunched, and recovered it again (http://gizmodo.com/blue-origin-relaunched-the-rocket-they-landed-in-novemb-1754664540). The rocket is smaller and suborbital—the altitude and speeds aren’t comparable to the Falcon 9—but it’s still great news for space travel.]
3. Personal Satellites Are Making Space More Widely Accessible
Satellites are shrinking, and so is the cost to build them and shoot them up into orbit (http://singularityhub.com/2015/11/06/a-personal-satellite-for-christmas-reaching-space-is-becoming-relatively-cheap/).
"The newest orbiter on the scene is called http://www.thumbsat.com/ ">ThumbSat. For $20,000, the company provides a “Mission Builder” application and then the hardware that becomes your personal satellite. As if you were building a Squarespace website, you can pull together different components that will allow you to design a tiny satellite full of measuring, image-capturing devices.
Once you have your brilliant idea for what you’d pay $20,000 to study, ThumbSat handles all the paperwork. They set your sat up with a launch vehicle; track its position with their ThumbNet set of tracking stations (http://www.thumbsat.com/thumbnet); and procure the radio-transmission licenses necessary to send data from your device back down to Earth.
Part of the problem with small satellites in the past was the “getting to space” part. To be viable as the primary reason for a rocket launch, the number of picosatellites has to be huge. And because of that, the “secondary” payload market becomes the primary option—in which launches meant mostly for bigger projects let their smaller cousins squeeze into some empty spaces.
But sending rockets to space is becoming ever more common, and thus, as economics promises, cheaper. Private companies like SpaceX are expanding both their reach and their technology. And as they grow, so will opportunities to plunk homemade satellites onboard.
The personal satellite revolution is also brought to you by shrinking computer circuits. The TubeSat (http://www.interorbital.com/interorbital_06222015_002.htm), available for $8,000 (including the launch, like ThumbSat) from Interorbital Systems, provides the same reconfigurable power through printed circuit boards as a microcomputer. With it, you can take video of Earth from space, measure our planet’s magnetic field, track animals migrating miles below, and monitor the spacey environment just above Earth.
While $8,000-$20,000 is still a lot for an individual to pay to send something to space, the cost has dropped with the size and will likely continue to. Maybe one day, kids will get femtosat DIY kits—and access to the universe—for their birthdays."
That wraps up Space and Technology Review Part 1. Stay tuned for Parts 2 through 4.
Banner Image Credit: NASA (https://www.nasa.gov/image-feature/good-morning-from-the-international-space-station)
Sitting on Dr. Peter Liacouras’s desk is a razor, a stick of deodorant, and a partially built prosthetic arm. Behind him, several 3D printers buzz away, creating contraptions in plastic, nylon, and titanium. Today he is working on a custom device that will allow a wounded service member to get ready in the morning by themselves. We take it for granted, but this can be a daunting and time-consuming task for those who have lost a limb. As the director of service for the 3D Medical Applications Center at Walter Reed National Military Medical Center, Liacouras uses cutting-edge technologies to improve people’s quality of life by pushing the fields of prosthetics and orthotics forward.
His goal is simple: to allow wounded service members to do the things that they used to do before getting injured. A provider recently asked if he would like to help an injured veteran play ice hockey again, and he gladly accepted. To do this he will have to study the biomechanics of the activity, examine how body weight shifts while skating, create anatomical models with a CT scanner, and then involve his whole team to brainstorm ways to give each individual patient the best possible outcome. As Liacouras detailed, these procedures allow for the creation of a customized treatment for each service member: “In amputee care we’ve created all sorts of different devices that allow them to go fishing again, rock climbing again, skating again, kayaking again. These are a different type of patient from the past; these are young, active patients that like to take part in complex activities. And this has really filled that gap of where normal prosthetics stop and specialty prosthetics start.”
Two decades ago, much of this would not have been possible—the technology just wasn’t available. 3D printing, also known as “additive manufacturing,” has come a long way since Walter Reed first started experimenting with it in 2003. The printers themselves have become cheaper, faster, and better able to handle stronger materials. The technology’s adoption in healthcare has taken off. Batteries have gotten smaller, and equipment lighter. Even the components inside the prosthetics now include microprocessors and advanced sensors. But more important than the technology itself is what has been done with it, which has pushed the boundaries of what anyone originally thought possible.
“The majority of our active duty patients this week are in Colorado skiing as part of their therapy,” Dave Laufer, the director of Orthotic & Prosthetic Services at the Department of Rehabilitation, proudly told me. He has seen the field of prosthetics change and grow over time from a manual, artisan process to one that is becoming inundated with technology. While some of this is “technology for technology’s sake,” when digital methods do work effectively they can be extremely helpful. At best, these tools can make prosthetics less apparent and more intuitive. Microprocessors in artificial knees, for example, have allowed the injured to stand with minimal effort—a game changer in the field. 3D printing can allow for prosthetics to be comfortable and symmetrical to existing limbs, which can be a major factor in whether people actually use them.
These advances mark what Laufer calls “an unintended good consequence from an unfortunate occurrence.” More Americans are returning home from combat having survived severe wounds and injuries than at any point in the past (http://www.pbs.org/coming-back-with-wes-moore/about/facts/). Since the conflicts in Iraq and Afghanistan began in 2001, there has been an unprecedented focus on the plight of the disabled. In the past, disagreements with Medicare and private insurance companies over what should be reimbursable had led to some stagnation. But recently, the US Department of Defense has bolstered the industry with its search for optimal solutions for wounded service members. There are more than a million visits to Walter Reed every year; it’s one of the largest treatment centers in the country. Whatever new solutions are developed will have to thrive under its intense requirements.
Taken as a whole, this marks a fundamental shift in the way treatment is done from disability management to something more like a sports medicine model, where patients are treated like professional athletes. Army Captain Nicole Brown, officer-in-charge of the MATC, explained that the main goal for a service member used to be that they could “get around in a wheelchair all day, now we have goals of patients running in marathons.” Treatment is tailored to each patient’s personal needs and can be adjusted on a day-to-day basis depending on what the therapist sees. Advancements in surgical techniques have been helpful, but what continues to amaze her is the sheer will of the service members she treats—“their fortitude, their ability to keep going in the face of adversity…they don’t take no for an answer.” It’s the patients who are setting the terms of their care—they are showing the industry how far it can go by shattering the limits of what they were “supposed” to be able to do.
This transformation has opened the door for experts and entrepreneurs to contribute as well. Late last year, Walter Reed hosted a pilot study to test a new type of 3D printed prosthetic leg cover that is able to withstand tough physical conditions. The product was the result of a partnership between UNYQ (https://www.youtube.com/watch?v=3KARtQa0ULI), a San Francisco- and Seville-based startup that focuses on making prosthetics fashionable and functional, led by bionics pioneer Eythor Bender, and Medical Center Orthotics & Prosthetics (http://mcopro.com/), a Maryland-based Department of Defense contractor that collaborates with leading technology manufacturers to create solutions for highly active patients, led by Mark McVicker, who has spent 20 years working in the field. McVicker sees the leg covers as the start of a series of products that aim to streamline the process of helping an amputee recover.
More than just a physical product, the leg covers they introduced marked a breakthrough of sorts. To begin with, the process of fitting the patient for a prosthetic leg is done with UNYQ’s smartphone app. All that’s required is eight photos for an above-knee amputee, and UNYQ is able to use photogrammetry software to create a CAD file that can go straight to the printer. What was once done by a team of people using their eyes can now be done more precisely by machine in minutes. Future versions of the product can easily be outfitted with sensors to collect information about how long it’s been worn, how it’s being used, and how it can be improved—which the company is already doing with its new line of award-winning scoliosis braces (http://www.prweb.com/releases/2016/01/prweb13151869.htm). (Watch Eythor Bender discuss prosthetics challenges and solutions below.)
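The article doesn’t describe UNYQ’s actual pipeline, but the kind of geometry that photogrammetry produces can be sketched. Assuming (hypothetically) that the photos have already been reconstructed into a 3D point cloud of the limb, a downstream step might slice it into horizontal cross-sections and estimate a circumference profile—the sort of measurement data a CAD template could be fitted to. A minimal NumPy sketch on synthetic cylinder data; all function names here are illustrative, not UNYQ’s software:

```python
import numpy as np

def circumference_profile(points, n_slices=5):
    """Estimate circumference at evenly spaced heights along a scanned limb.

    points: (N, 3) array of x, y, z coordinates from a reconstruction.
    Returns (height, circumference) pairs, approximating each horizontal
    slice as a circle with the slice's mean radius about its centroid.
    """
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    profile = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(z >= lo) & (z < hi)]
        if len(sl) == 0:
            continue
        center = sl[:, :2].mean(axis=0)                      # slice centroid
        radii = np.linalg.norm(sl[:, :2] - center, axis=1)   # point distances
        profile.append(((lo + hi) / 2, 2 * np.pi * radii.mean()))
    return profile

# Synthetic test data: a cylinder of radius 5 and height 10.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 2000)
zs = rng.uniform(0, 10, 2000)
cloud = np.column_stack([5 * np.cos(theta), 5 * np.sin(theta), zs])
profile = circumference_profile(cloud)
for height, circ in profile:
    print(f"z ≈ {height:4.1f}: circumference ≈ {circ:.2f}")  # ≈ 2π·5 ≈ 31.4
```

For a real limb the slices are not circular, so a production system would keep the raw cross-section outlines rather than a single radius, but the slicing structure is the same.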
To everyone’s detriment, the emotional and aesthetic aspects of prosthetics have long been ignored, according to Bender. He believes the industry needs a change in mindset, where personalization is understood to be just as significant as function. What the industry has produced so far has been cold and clinical, which greatly impacts how amputees feel about themselves. He hopes to drive this change by creating a digitized, repeatable process that frees up the time of doctors and clinicians so they can focus on their patients—as he puts it, “less worry about making the leg and more focus on getting people up and walking as fast as possible.” For the first time, recovering service members and veterans will be able to participate in the creative process of making their own prosthetic limbs by choosing the color, design, and materials.
What Bender and others innovating in the space are doing is creating choice where none existed before. The service members and veterans who have sacrificed their health and wellbeing for their country are once again showing us the strength of the human spirit and how far people can go when enabled by technology. Through perseverance, they are leading a complete change in how we process, handle, and understand disability that will improve lives for generations to come.
Banner Image Credit: UNYQ (http://unyq.com/press/)
Virtual Workplaces Will Liberate Talent, Dissolve Borders, and Rewrite the Source Code of Innovation
Innovation is the currency of the modern world. Naturally, we want to figure out how innovation happens and how to get more of it. The current recipe is to gather smart, passionate people together in a city, add a dash (or mound) of investor capital, and let the magic happen.
Why cities? Ideas thrive when they’re easily exchanged, combined, discarded, and built upon. If ideas live in people, then people need to be near one another to most efficiently swap ideas. Population is densest in cities, so in theory, ideas and innovation should be too.
This has produced incredible results. But it’s not without drawbacks and limitations.
Innovation hubs like San Francisco face rising rents, social unrest, and declining diversity as lower-income professionals flee to more affordable locales. Further, borders and immigration restrictions prevent talented people from traveling between countries, and therefore between the cities that drive the information economy.
There’s a common thread here: The past few centuries (even millennia) have been about making the most of physical space to let innovation thrive. But we may be reaching physical and political limits. Thriving cities force tough trade-offs, and immigration policies won’t change overnight. Solving these problems could dramatically increase the global rate of innovation by letting the top people work on the most important problems, no matter where they live.
So, what’s the answer?
The digital workplace faces none of these problems. Space is unlimited. Physical location is less relevant. Traveling between areas takes seconds. Political borders are blurred. No passports or visas are needed. If it’s proximity ideas want, then this century’s equivalent of the city is online.
To date, however, we’ve been unable to replicate the intangible (yet obvious) value of face-to-face interaction. This is a prime hurdle to bringing work online—but it’s about to come down.
The Virtual Reality Office
The dream of a fully distributed team has been around for a long time, and many companies are successfully running remote teams today. Without fail, though, these teams have a common list of complaints. In distributed offices, it’s harder to get employees to maintain focus, and company culture inevitably takes a hit. Worst of all, it’s difficult to replicate the spontaneous interactions that are so critical to empowering high-performance teams.
Even with modern communication tools like Slack and Skype, we aren’t making distributed teams as socially and intellectually expressive as those in traditional offices. Despite all the advances in communications technology in recent decades, at the end of the day most great ideas still come from a group of people working together in a room.
There’s nothing else that even comes close.
At least there wasn’t, until now. The virtual reality revolution is in the early stages of providing us with the tools necessary to create global collaborative networks that combine the flexibility of remote work with the traditional benefits of a real-life office.
So why is virtual reality different? After all, we can email, IM, or video chat with our coworkers from anywhere today. Has anything really changed?
It all boils down to one concept: presence.
Presence is what happens when your brain is convinced on a subconscious level that the virtual scene you are inhabiting is real. Presence is an extremely powerful sensation, and understanding presence is the key to understanding the current hype behind virtual reality.
With social presence, you really feel like you are in the same room as the other person. With social presence, you can finally communicate across distances with the ease and clarity of face-to-face communication. Nonverbal communication, gestures, and subtle facial expressions (http://www.forbes.com/sites/augustturak/2010/12/17/the-business-of-nonverbal-communication-how-signals-reflect-your-brand/#63919acb2eec) are all critical to communication but have been difficult to express digitally. With social presence, we can make digital communication as natural as the real world.
So how much is the power of social presence worth? At least $2 billion, according to Mark Zuckerberg, who bought Oculus to help ensure that Facebook is competitive on the leading social platform of the future.
By harnessing the power of social presence, we’ll soon be able to create virtual reality offices and spaces where people can meet, talk, work, and debate just as if they were together in real life. The intangible barriers to communication that come with teleworking will melt into the background, and we’ll be able to communicate seamlessly with one another.
That’s only the beginning—it really gets exciting once we start incorporating elements such as 3D data visualization and digital simulations into our offices.
If virtual reality can nail social presence, we’ll open up the global talent pool, and what happens next may be as world-shaking as when humans settled in early cities.
The Global Talent Pool
“The main lesson … is that innovation is usually a group effort, involving collaboration between visionaries and engineers, and that creativity comes from drawing on many sources. Only in storybooks do inventions come like a thunderbolt, or a lightbulb popping out of the head of a lone individual in a basement or garret or garage.”—Walter Isaacson, The Innovators
As we dive headfirst into a world of constant innovation and technological disruption, it is impossible to overstate the power of small, driven teams. Software is eating the world, and start-ups are building the best, most innovative software. Relatively small teams with big-time software, like Uber, Airbnb, and Facebook, are rewriting the source code of our daily lives.
We now live in a world where a team running on two pizzas (http://www.fastcompany.com/3037542/productivity-hack-of-the-week-the-two-pizza-approach-to-productive-teamwork) can challenge a Fortune 500 company for market dominance.
A great paradox of the modern world is that while we can communicate better than ever before across great distances, the top teams are placing increasing importance on personal interaction. Simple transactional work can be done easily over email or videoconferencing, but true collaboration still requires getting a group of innovative thinkers together in a room.
And what’s more, the best companies and organizations all want to be in the same place, which is why companies are flocking to San Francisco, Boston, and Washington, DC. These innovation centers are crucial to the modern economy. It’s not an overstatement to say the modern technological revolution wouldn’t have happened without them.
But there’s a huge problem.
No matter how many smart and talented people you gather together in any particular place, you won’t have the vast majority of top performers there. That means that every company in the world is being massively hamstrung by not having access to the global talent pool. While we’re starting to unlock this problem with the “gig economy” and sites like Elance, coordination with remote team members can prove more difficult than the task that’s being worked on.
Now imagine that there’s a way to eliminate the physical distance between everyone in the world. A programmer from London, a graphic designer from Shanghai, and a user-experience engineer from Mexico City get ready for work and find themselves in the same office without having to commute. We would see an absolute explosion in innovation and productivity as the best people from around the world could form teams without regard for distance.
The virtual reality office will make the entire world one large innovation hub with no boundaries or spatial limitations.
Rewriting the Innovation Source Code
Our current innovation and managerial framework is a technology like any other, and it is about to be disrupted. And just in time. In many ways we’ve innovated ourselves into a corner: industrialization has caused global warming, biotech terrorism presents a new and chilling threat, and of course, the information economy fuels gentrification and immigration stresses.
Perhaps soon we will be holding VR interviews with anonymized avatars to prevent unconscious bias in hiring (http://www.fastcompany.com/3036627/strong-female-lead/youre-more-biased-than-you-think). Or we will create international war rooms to fight the next infectious disease outbreak. At the very least, we’ll waste less time sitting in traffic.
The virtual reality office is not the silver bullet to solve all of these global issues, but it does offer us a new and unique opportunity to create tools in the fight for a greener, safer, and more equitable world. It not only can help us iron out the flaws in our current system, but can allow us to create new organizations that move at lightning speed due to their access to the global talent network.
The Internet brought us a new breed of organization that can harness digital tools to allow a small team to make extraordinary impacts on the world. With the rise of the virtual reality office, we could see another breakthrough on the same scale. Let’s make it happen.
Image Credit: Shutterstock.com (http://www.shutterstock.com)
In modern times, farming has gone from humanity’s top job to a sliver of the economy—a trend that continues today as fewer young people choose to farm. For every farmer under 35 there are six over 65 (http://www.yesmagazine.org/issues/good-health/if-there-are-no-new-farmers-who-will-grow-our-food-20160201), and a quarter of today’s US farmers will retire by 2030. But we all still have to eat.
A recent Yes! Magazine article wonders: “If there are no new farmers, who will grow our food?” (http://www.yesmagazine.org/issues/good-health/if-there-are-no-new-farmers-who-will-grow-our-food-20160201)
Robots, of course.
Though the article argues we’re in dire need of more human farmers, it forgets to mention one of the primary drivers of modern agriculture—automation. It now takes far, far fewer farmers to supply food for the rest of us. Along with reduced crop failure and improvements in yield per acre, we’ve also steadily swapped the sweat of our brow for the oiled whir of machinery (https://en.wikipedia.org/wiki/List_of_agricultural_machinery).
And that trend continues. One recent example? Japan’s new automated indoor lettuce farm (http://www.theguardian.com/environment/2016/feb/01/japanese-firm-to-open-worlds-first-robot-run-farm).
Growing lettuce isn't the flashiest occupation, but it gets a little flashier when you do it with the press of a button. Japanese company Spread (http://spread.co.jp/en/) is expanding its indoor farm and more fully automating it.
People will plant the seeds, but a robotic system takes it from there. Conveyor belts equipped with robot arms will water, trim, re-plant, and harvest crops. Sensors will monitor humidity, CO2, light, and temperature—automatically adjusting the indoor climate to make sure the lettuce is happy.
“The seeds will still be planted by humans, but every other step, from the transplanting of young seedlings to larger spaces as they grow to harvesting the lettuces, will be done automatically,” according to JJ Price, Spread’s global marketing manager (http://www.theguardian.com/environment/2016/feb/01/japanese-firm-to-open-worlds-first-robot-run-farm).
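The sensor-driven climate adjustment described above amounts to a control loop: read each sensor, compare against a target band, and command an actuator when a reading drifts out of range. A toy sketch in that spirit; the sensor names, setpoint bands, and commands are illustrative assumptions, not Spread’s actual system:

```python
# Target bands per sensor: (low, high). Values are illustrative, not
# Spread's real setpoints for growing lettuce.
SETPOINTS = {
    "temp_c": (18, 24),        # air temperature, °C
    "humidity_pct": (60, 80),  # relative humidity, %
    "co2_ppm": (800, 1200),    # CO2 concentration, ppm
}

def adjust(readings):
    """Return an actuator command for every reading outside its band."""
    commands = {}
    for key, (lo, hi) in SETPOINTS.items():
        value = readings[key]
        if value < lo:
            commands[key] = "raise"   # e.g., turn on heater / CO2 injector
        elif value > hi:
            commands[key] = "lower"   # e.g., turn on cooling / ventilation
    return commands

# Too warm and CO2-starved: expect two corrective commands.
print(adjust({"temp_c": 26, "humidity_pct": 70, "co2_ppm": 750}))
# → {'temp_c': 'lower', 'co2_ppm': 'raise'}
```

A production system would add hysteresis and rate limits so actuators don’t chatter at the band edges, but the read-compare-command structure is the core idea.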
Compared to its current indoor farm, Spread's new facility aims to reduce energy costs by a third with LEDs (http://spread.co.jp/en/environment/) and labor costs by half through automation (http://www.theguardian.com/environment/2016/feb/01/japanese-firm-to-open-worlds-first-robot-run-farm). And by recycling 98% of its water (http://spread.co.jp/en/environment/), Spread says its pesticide-free lettuce consumes 100 times less water than conventionally grown lettuce.
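As a rough sanity check on the recycling figure: if a fraction r of circulating water is recaptured, only the remaining 1 − r must be supplied fresh, so 98% recycling alone yields a 50x reduction before counting hydroponics’ other efficiencies over field irrigation. Illustrative arithmetic only, not Spread’s published methodology:

```python
def freshwater_fraction(recycle_rate):
    """Fraction of water throughput that must be supplied fresh."""
    return 1 - recycle_rate

r = 0.98
savings = 1 / freshwater_fraction(r)
print(f"Recycling {r:.0%} cuts fresh-water demand {savings:.0f}x")
# → Recycling 98% cuts fresh-water demand 50x
```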
Once operational next year, the farm will more than double production from 21,000 heads of lettuce a day to 50,000, and Spread is aiming for half a million a day within five years.
While indoor farms offer a more controlled setting, farm robots aren't limited to them (http://singularityhub.com/2014/07/14/pepper-picking-soil-testing-plant-pruning-robots-are-coming-to-farms/).
Self-driving tractors have been in fields for years (http://www.cbsnews.com/news/farmers-reap-benefits-self-driving-tractor-technology/). A farmer usually has to be in the cab, but they can focus their attention elsewhere, doing business on a laptop, for example. (And full autonomy is coming.) Other kinds of farm robots abound (http://gizmodo.com/13-fascinating-farming-robots-that-will-feed-our-future-1683489468). Robot arms can prune plants or spot and pick ripe fruit. Autonomous drones can skim fields and monitor crop health from above.
All this farm automation isn't new; it's the continuation of a long trend. Robots will take over some jobs from people, but fewer of us are choosing to farm anyway. If your focus is elsewhere, no problem: farm robots like these will make sure you still eat your greens.
Image Credit: Shutterstock.com (http://www.shutterstock.com)