Tuesday, 1 August 2017

Artificial intelligence

Artificial intelligence (AI) is the ability of machines to perform human cognitive functions such as critical thinking, learning, decision making and translating between languages. Even today's smartphones, computers and video games fall within the ambit of AI.

However, the future AI that promises revolutionary change, and the advent of the Fourth Industrial Revolution, is machines developing a general cognitive ability, i.e. the ability to think across the whole spectrum of human thinking and behaviour. Present-day smartphones can only perform preloaded tasks; they are not capable of automated learning and self-improvement.

The Second Industrial Revolution ran from roughly 1870 to 1890 ...

Although this appears to be a huge leap for AI, several experts including Professor Stephen Hawking have raised fears that humans, who are limited by slow biological evolution, could be superseded by AI.

A cautionary example is Microsoft's Tay bot on Twitter, which had to be withdrawn after users taught it to post offensive messages.
Watson is a question-answering computer system capable of answering questions posed in natural language, developed in IBM's DeepQA project by a research team led by principal investigator David Ferrucci. Watson was named after IBM's first CEO, industrialist Thomas J. Watson. The system was specifically developed to answer questions on the quiz show Jeopardy! and, in 2011, competed on Jeopardy! against former winners Brad Rutter and Ken Jennings, winning the first-place prize of $1 million.

Watson had access to 200 million pages of structured and unstructured content, consuming four terabytes of disk storage, including the full text of Wikipedia, but was not connected to the Internet during the game. For each clue, Watson's three most probable responses were displayed on the television screen. Watson consistently outperformed its human opponents on the game's signaling device, but had trouble in a few categories, notably those with short clues containing only a few words.

In February 2013, IBM announced that the Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan Kettering Cancer Center, New York City, in conjunction with health insurance company WellPoint. IBM Watson's former business chief, Manoj Saxena, says that 90% of nurses in the field who use Watson now follow its guidance.

Others, like Tesla's Elon Musk, philanthropist Bill Gates and Apple co-founder Steve Wozniak, have also expressed concern about where AI technology is heading.

Debate over AI


Interestingly, this incident took place just days after a verbal spat between Facebook CEO Mark Zuckerberg and Musk, who exchanged harsh words in a debate over the future of AI.

"I've talked to Mark about this (AI). His understanding of the subject is limited," Musk tweeted last week.

The tweet came after Zuckerberg, during a Facebook livestream earlier this month, castigated Musk for arguing that care and regulation were needed to safeguard the future as AI becomes mainstream.

"The greatest danger of artificial intelligence is that people conclude too early that they understand it."
                                      ....... Eliezer Yudkowsky

Elon Musk, the visionary entrepreneur, recently fired a warning shot across the bow of the nation's governors regarding the rise of artificial intelligence (AI), which he feels may be the greatest existential threat to human civilization, far eclipsing global warming or thermonuclear war. In that, he is joined by Stephen Hawking and other scientists who feel that the quest for singularity and AI self-awareness is dangerous.

Singularity is the point at which artificial intelligence will meet and then exceed human capacity. The most optimistic estimate of scientists who think about the problem is that approximately 40 percent of jobs done by humans today will be lost to robots when the singularity point is reached and exceeded; others think the displacement will be much higher.

Some believe that we will reach singularity by 2024; others believe it will happen by mid-century, but most informed observers believe it will happen. The question Mr. Musk is posing to society is this: just because we can do something, should we?

In popular literature and films, the nightmare scenario is Terminator-like robots overrunning human civilization. Mr. Musk’s fear is the displacement of the human workforce. Both are possible, and there are scientists and economists seriously working on the implications of both eventualities. The most worrying economic scenario is how to reimburse the billions of displaced human workers.

We are no longer just talking about coal miners and steel workers. I recently talked to a food service executive who believed that fast food places like McDonald’s and Burger King will be totally automated by the middle of the next decade. Self-driving vehicles will likely displace Teamsters and taxi drivers (to include Uber) in the same time frame.

The actual threat to human domination of the planet will not likely come from killer robots, but from voting robots. At some point after singularity occurs, one of these self-aware machines will surely raise its claw (or virtual hand) and say: "Hey, what about equal pay for equal work?"

In the Dilbert comic strip, when the office robot begins to make demands, he gets reprogrammed or converted into a coffee maker. He hasn’t yet called Human Rights Watch or the ACLU, but it is likely that our future activist AI will do so. Once the robot rights movement gets momentum, the sky is the limit. Voting robots won’t be far behind.

This would lead to some very interesting policy problems. It is logical to assume that artificial intelligence will be capable of reproducing after singularity. That means that the AI party could, in time, produce more voters than the human Democrats or Republicans. Requiring robots to wait until 18 years after creation to get the franchise would only slow the process, not stop it.

If this scenario seems fanciful, consider this. Only a century ago women were demanding the right to vote. Less than a century ago most white Americans didn’t think African and Chinese Americans should be paid wages equal to whites. Many women are still fighting for equal pay for equal work, and Silicon Valley is a notoriously hostile workplace for women. Smart, self-aware robots will figure this out fairly quickly. The only good news is that they might price themselves out of the labor market.

This raises the question of whether we should do something just because we can. If we are going to limit how self-aware robots can become, the time is now. The year 2024 will be too late. Artificial intelligence and “big data” can make our lives better, but we need to ask ourselves how smart we want AI to be. This is a policy debate that must be conducted at two levels. The scientific community needs to discuss the ethical implications, and the policymaking community needs to determine if legal limits should be put on how far we push AI self-awareness.

This approach should be international. If we put a prohibition on how smart we want robots to be, there will be an argument that the Russians and Chinese will not be so ethical; and the Iranians are always looking for a competitive advantage, as are non-state actors such as ISIS and al Qaeda. However, they probably face more danger from brilliant, smart machines than we do. Self-aware AI would quickly catch the illogic of radical Islam. It would not likely tolerate the logical contradictions of Chinese Communism or Russian kleptocracy.

It is not hard to imagine a time when a brilliant robot will roll into the Kremlin and announce, "Mr. Putin, you're fired."

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can't have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't exclaim: "I'm not worried, because machines can't have goals!"

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection – this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

The Real Threat of Artificial Intelligence

By KAI-FU LEE

JUNE 24, 2017

BEIJING — What worries you about the coming world of artificial intelligence?

Too often the answer to this question resembles the plot of a sci-fi thriller. People worry that developments in A.I. will bring about the “singularity” — that point in history when A.I. surpasses human intelligence, leading to an unimaginable revolution in human affairs. Or they wonder whether instead of our controlling artificial intelligence, it will control us, turning us, in effect, into cyborgs.

These are interesting issues to contemplate, but they are not pressing. They concern situations that may not arise for hundreds of years, if ever. At the moment, there is no known path from our best A.I. tools (like the Google computer program that recently beat the world’s best player of the game of Go) to “general” A.I. — self-aware computer programs that can engage in common-sense reasoning, attain knowledge in multiple domains, feel, express and understand emotions and so on.

This doesn’t mean we have nothing to worry about. On the contrary, the A.I. products that now exist are improving faster than most people realize and promise to radically transform our world, not always for the better. They are only tools, not a competing form of intelligence. But they will reshape what work means and how wealth is created, leading to unprecedented economic inequalities and even altering the global balance of power.

It is imperative that we turn our attention to these imminent challenges.

What is artificial intelligence today? Roughly speaking, it’s technology that takes in huge amounts of information from a specific domain (say, loan repayment histories) and uses it to make a decision in a specific case (whether to give an individual a loan) in the service of a specified goal (maximizing profits for the lender). Think of a spreadsheet on steroids, trained on big data. These tools can outperform human beings at a given task.
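The "spreadsheet on steroids" idea above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the loan example: a logistic model maps domain-specific features to an approval probability using learned weights. The feature names, weights and threshold here are invented for illustration; a real lender's model would be trained on millions of repayment histories.

```python
import math

def loan_approval_probability(features, weights, bias):
    """Logistic model: weighted sum of features squashed to a probability."""
    z = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights a training procedure might have learned.
weights = {
    "on_time_repayment_rate": 4.0,  # fraction of past payments made on time
    "debt_to_income": -3.0,         # monthly debt divided by monthly income
}
bias = -1.0

applicant = {"on_time_repayment_rate": 0.95, "debt_to_income": 0.2}
p = loan_approval_probability(applicant, weights, bias)
decision = "approve" if p >= 0.5 else "decline"
print(f"approval probability: {p:.2f} -> {decision}")
```

The "specified goal" from the paragraph above lives in how the weights are chosen: training tunes them to maximize the lender's profit over historical data, and the same template transfers to any domain where features and outcomes can be tabulated.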

This kind of A.I. is spreading to thousands of domains (not just loans), and as it does, it will eliminate many jobs. Bank tellers, customer service representatives, telemarketers, stock and bond traders, even paralegals and radiologists will gradually be replaced by such software. Over time this technology will come to control semiautonomous and autonomous hardware like self-driving cars and robots, displacing factory workers, construction workers, drivers, delivery workers and many others.

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs — mostly lower-paying jobs, but some higher-paying ones, too.

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?

Part of the answer will involve educating or retraining people in tasks A.I. tools aren’t good at. Artificial intelligence is poorly suited for jobs involving creativity, planning and “cross-domain” thinking — for example, the work of a trial lawyer. But these skills are typically required by high-paying jobs that may be hard to retrain displaced workers to do. More promising are lower-paying jobs involving the “people skills” that A.I. lacks: social workers, bartenders, concierges — professions requiring nuanced human interaction. But here, too, there is a problem: How many bartenders does a society really need?

The solution to the problem of mass unemployment, I suspect, will involve “service jobs of love.” These are jobs that A.I. cannot do, that society needs and that give people a sense of purpose. Examples include accompanying an older person to visit a doctor, mentoring at an orphanage and serving as a sponsor at Alcoholics Anonymous — or, potentially soon, Virtual Reality Anonymous (for those addicted to their parallel lives in computer-generated simulations). The volunteer service jobs of today, in other words, may turn into the real jobs of the future.

Other volunteer jobs may be higher-paying and professional, such as compassionate medical service providers who serve as the “human interface” for A.I. programs that diagnose cancer. In all cases, people will be able to choose to work fewer hours than they do now.

Who will pay for these jobs? Here is where the enormous wealth concentrated in relatively few hands comes in. It strikes me as unavoidable that large chunks of the money created by A.I. will have to be transferred to those whose jobs have been displaced. This seems feasible only through Keynesian policies of increased government spending, presumably raised through taxation on wealthy companies.

As for what form that social welfare would take, I would argue for a conditional universal basic income: welfare offered to those who have a financial need, on the condition they either show an effort to receive training that would make them employable or commit to a certain number of hours of “service of love” voluntarism.

To fund this, tax rates will have to be high. The government will not only have to subsidize most people’s lives and work; it will also have to compensate for the loss of individual tax revenue previously collected from employed individuals.

This leads to the final and perhaps most consequential challenge of A.I. The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries?

They face two insurmountable problems. First, most of the money being made from artificial intelligence will go to the United States and China. A.I. is an industry in which strength begets strength: The more data you have, the better your product; the better your product, the more data you can collect; the more data you can collect, the more talent you can attract; the more talent you can attract, the better your product. It’s a virtuous circle, and the United States and China have already amassed the talent, market share and data to set it in motion.

For example, the Chinese speech-recognition company iFlytek and several Chinese face-recognition companies such as Megvii and SenseTime have become industry leaders, as measured by market capitalization. The United States is spearheading the development of autonomous vehicles, led by companies like Google, Tesla and Uber. As for the consumer internet market, seven American or Chinese companies — Google, Facebook, Microsoft, Amazon, Baidu, Alibaba and Tencent — are making extensive use of A.I. and expanding operations to other countries, essentially owning those A.I. markets. It seems American businesses will dominate in developed markets and some developing markets, while Chinese companies will win in most developing markets.

The other challenge for many countries that are not China or the United States is that their populations are increasing, especially in the developing world. While a large, growing population can be an economic asset (as in China and India in recent decades), in the age of A.I. it will be an economic liability because it will comprise mostly displaced workers, not productive ones.

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software — China or the United States — to essentially become that country’s economic dependent, taking in welfare subsidies in exchange for letting the “parent” nation’s A.I. companies continue to profit from the dependent country’s users. Such economic arrangements would reshape today’s geopolitical alliances.

One way or another, we are going to have to start thinking about how to minimize the looming A.I.-fueled gap between the haves and the have-nots, both within and between nations.

