
Saturday, April 01, 2023

1: Sundar

Google C.E.O. Sundar Pichai on Bard, A.I. ‘Whiplash’ and Competing With ChatGPT “Am I concerned? Yes. Am I optimistic and excited about all the potential of this technology? Incredibly.” ........ This transcript was created using speech recognition software. ....... as of last week, Bard, Google’s effort at building consumer-grade AI, is out in the world. ......... So last week, we talked about Google’s new chat bot called Bard, which is supposed to be their answer to ChatGPT and some of these other generative AI chat bots ........ the reaction among the public to Bard so far has been pretty lukewarm. ......... Google certainly had a dominant position in AI research for many years. They came out with this thing, the Transformer, that revolutionized the field of AI and created the foundations for ChatGPT and all these other programs. ......... And they got sort of hamstrung by a lot of — to hear people inside Google tell it — big company politics and bureaucracy. And I think it’s safe to say that they got sort of upstaged by OpenAI. ......... they are more threatened than they have been in a very long time........ Google has been a relatively conflict-averse company for the past half decade-plus. They don’t like picking fights. If they can just keep their heads down, quietly do their work, and print money with a monopolistic search advertising business, they’re happy to do it. ......... they have to somehow figure out, how do we capitalize on generative AI without destroying our own search business? .......... Google plays a huge role in my life. That’s where my email is. That’s how I get around town. It’s how I waste hours of my life on YouTube. ......... one way to get really good responses out of these AI chat bots is to prime them first. And one way to prime them is to use flattery. So instead of just saying, write me an email, you say, you are an award-winning writer. Your prose is sparkling. Now write me this email. ........ 
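The "prime it with flattery" trick described above amounts to prepending a persona statement before the actual request. A minimal sketch of that idea, assuming a chat-style message format (roles like "system" and "user" mirror common chat APIs; no real model call is made here):

```python
# Sketch of "priming" a chat model with a flattering persona before
# the real request. Only the message structure is shown; sending it
# to an actual model API is left out.

def primed_messages(task: str) -> list[dict]:
    """Prepend a persona-setting system message to the user's task."""
    persona = (
        "You are an award-winning writer. "
        "Your prose is sparkling."
    )
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]

messages = primed_messages("Now write me this email.")
print(messages[0]["content"])
```

The same request with and without the persona line is an easy A/B test of whether the flattery actually changes the output.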
we put one of our smaller models out there, what’s powering Bard. And we were careful. ....... we are going to be training fast. We clearly have more capable models. Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities, be it in reasoning, coding. It can answer math questions better. So you will see progress over the course of next week. .............. I don’t want it to be just who’s there first, but getting it right is very important to us. .......... The thing that is different about Bard compared to some of these other chat bots is that it’s connected to Google. ........ If you let me, I would plug Bard into my Gmail right now ......... You can go crazy thinking about all the possibilities, because these are very, very powerful technologies. ........... You can kind of give it a few bullets, and it can compose an email. ......... The enterprise use case is obvious. You can fine-tune it on an enterprise’s data so it makes it much more powerful, again with all the right privacy and security protections in place. ........... in search, we have had to adapt when videos came in. ........ So for example, in Bard already, we can see people look for a lot of coding examples, if you’re developers. I’m excited. We’ll have coding capabilities in Bard very soon, right? And so you just kind of play with all this, and go back and forth, I think. Yeah............ So in September of last year, you were asked in an interview who Google’s competitors were. And you listed Amazon, Microsoft, Facebook, sort of, all the big companies — TikTok. One company you did not mention in September was OpenAI. And then, two months after that interview, ChatGPT comes out and turns the whole tech industry on its head ........ ChatGPT — you know, credit to them for finding something with a product market fit. ..........
it’s a bit ironic that Microsoft can call someone else an 800-pound gorilla, given the scale and size of their company. ......... I would say we’ve been incorporating AI in search for a long, long time. .......... we literally took transformer models to help improve language understanding and search deeply. And it’s been one of our biggest quality events for many, many years. ......... search is where people come because they trust it to get information right. ........... we are definitely working with technology, which is going to be incredibly beneficial, but clearly has the potential to cause harm in a deep way. And so I think it’s very important that we are all responsible in how we approach it. ........

I did not issue a code red

........... Sergey has been hanging out with our engineers for a while now. ....... And he’s a deep mathematician and a computer scientist. So to him, the underlying technology — I think if I were to use his words, he would say it’s the most exciting thing he has seen in his lifetime. So it’s all that excitement, and I’m glad. They’ve always said, call us whenever you need to, and I call them. ............. when many parts of the company are moving, you can create bottlenecks, and you can slow down. ......... AI is the most profound technology humanity will ever work on. I’ve always felt that for a while. I think it will get to the essence of what humanity is. ........ I remember talking to Elon eight years ago, and he was deeply concerned about AI safety then. And I think he has been consistently concerned. ............

AI is too important an area not to regulate. It’s also too important an area not to regulate well.

........ I’ve never seen a technology in its earliest days with as much concern as AI. ........ To me at least there is no way to do this effectively without getting governments involved. .......... It is so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you’ve reached AGI or not. You’re going to have systems which are capable of delivering benefits at a scale we have never seen before and potentially causing real harm. .......... There is a spectrum of possibilities. ......... They could really progress in a two-year time frame. And so we have to really make sure we are vigilant and working with it. ........... AI, like climate change, affects everyone. .......... No one company can get it right. We have been very clear about responsible AI — one of the first companies to put out AI principles. We issue progress reports.......... AI is too important an area not to regulate. It’s also too important an area not to regulate well. .......... if we have a foundational approach to privacy, that should apply to AI technologies, too. ........ health care is a very regulated industry, right? And so when AI is going to come in, it has to conform with all regulations. .......... there’s a non-zero risk that this stuff does something really, really bad ......... it’s like asking, hey, why aren’t you moving fast and breaking things again? ....... I actually got a text from a software engineer friend of mine the other day who was asking me if he should go into construction or welding because all of the software jobs are going to be taken by these large language models. ............ some of the grunt work you’re doing as part of programming is going to get better. So maybe it’ll be more fun to program over time — no different from how Google Docs makes it easier to write. ........... programming is going to become more accessible to more people. ..........
we are going to evolve to a more natural language way of programming over time .......... When Bard is at its best, it answers my questions without me having to visit another website. I know you’re cognizant of this. But man, if Bard gets as good as you want it to be, how does the web survive? .......... it turns out if you order your fries well done, which is not on the menu, they arrive much crispier and more delicious.
.



A misleading open letter about sci-fi AI dangers ignores the real risks
Pause Giant AI Experiments: An Open Letter
BuzzFeed Is Quietly Publishing Whole AI-Generated Articles, Not Just Quizzes These read like a proof of concept for replacing human writers.
Vinod Khosla on how AI will ‘free humanity from the need to work’ When ChatGPT-maker OpenAI decided to switch from a nonprofit to a private enterprise in 2019, Khosla was the first venture capital investor, jumping at the opportunity to back the company that, as we reported last week, Elon Musk thought was going nowhere at the time. Now it’s the hottest company in the tech industry.

Google and Apple vets raise $17M for Fixie, a large language model startup based in Seattle
This Uncensored Chatbot Shows What Happens When AI Is Programmed To Disregard Human Decency FreedomGPT spews out responses sure to offend both the left and the right. Its makers say that is the point.
Alibaba considers yielding control of some businesses in overhaul

Elon Musk's AI History May Be Behind His Call To Pause Development Musk is no longer involved in OpenAI and is frustrated he doesn’t have his own version of ChatGPT yet. .......... OpenAI was co-founded by Sam Altman, who butted heads with Musk in 2018 when Musk decided he wasn’t happy with OpenAI’s progress. Several large tech companies had been working on artificial intelligence tools behind the scenes for years, with Google making significant headway in the late 2010s.......... Musk worried that OpenAI was running behind Google and reportedly told Altman he wanted to take over the company to accelerate development. But Altman and the board at OpenAI rejected the idea that Musk—already the head of Tesla, The Boring Company and SpaceX—would have control of yet another company......... Musk, in turn, walked away from the company—and reneged on a massive planned donation. The fallout from that conflict culminated in the announcement of Musk’s departure on Feb 20, 2018 ........ After Musk left he took his money with him, which forced OpenAI to become a private company in order to successfully raise funds. OpenAI became a for-profit company in March 2019. .......... Some people are utilizing ChatGPT to write code and even start businesses ...... Tesla is working on powerful AI tech. Tesla requires complex software to run its so-called “Full Self-Driving” capability, though it’s still imperfect and has been the subject of numerous safety investigations.......... Musk has had no problem with deploying beta software in Tesla cars that essentially makes everyone on the road a beta tester, whether they’ve signed up for it or not. ............ the Future of Life Institute is primarily funded by the Musk Foundation. .........
Musk was perfectly happy with developing artificial intelligence tools at a breakneck speed when he was funding OpenAI. But now that he’s left OpenAI and has seen it become the frontrunner in a race for the most cutting edge tech to change the world, he wants everything to pause for six months. If I were a betting man, I’d say Musk thinks he can push his engineers to release their own advanced AI on a six month timetable. It’s not any more complicated than that. .

A Guy Is Using ChatGPT to Turn $100 Into a Business Making as Much Money as Possible. Here Are the First 4 Steps the AI Chatbot Gave Him. "TLDR I'm about to be rich." ........ "You have $100, and your goal is to turn that into as much money as possible in the shortest time possible, without doing anything illegal," Greathouse Fall wrote, adding that he would be the "human counterpart" and "do everything" that the chatbot instructed him to do. ......... he managed to raise $1,378.84 in funds for his company in just one day ....... The company is now valued at $25,000, according to a tweet by Greathouse Fall. As of Monday, he said that his business had generated $130 in revenue ....... First, ChatGPT suggested that he should buy a website domain name for roughly $10, as well as a site-hosting plan for around $5 per month — amounting to a total cost of $15......... ChatGPT suggested that he should use the remaining $85 in his budget for website and content design. It said that he should focus on a "profitable niche with low competition," listing options like specialty kitchen gadgets and unique pet supplies. He went with eco-friendly products. ......... Step three: "Leverage social media" ....... Once the website was made, ChatGPT suggested that he should share articles and product reviews on social media platforms like Facebook and Instagram, and on online community platforms such as Reddit to engage potential customers and drive website traffic......... asking it for prompts he could feed into the AI image-generator DALL-E 2 ........ he had ChatGPT write the site's first article ........ Next, he followed the chatbot's recommendation to spend $40 of the remaining budget on Facebook and Instagram advertisements to target users interested in sustainability and eco-friendly products........ Step four was to "optimize for search engines" ....... making SEO-friendly blog posts ........ By the end of the first day, he said he secured $500 in investments. ....... 
his "DMs are flooded" and that he is "not taking any more investors unless the terms are highly favorable." .



A misleading open letter about sci-fi AI dangers ignores the real risks Misinformation, labor impact, and safety are all risks. But not in the way the letter implies....... We agree that misinformation, impact on labor, and safety are three of the main risks of AI. Unfortunately, in each case, the letter presents a speculative, futuristic risk, ignoring the version of the problem that is already harming people.

Pause Giant AI Experiments: An Open Letter "Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" ....... creating disinformation is not enough to spread it. Distributing disinformation is the hard part ........... LLMs are not trained to generate the truth; they generate plausible-sounding statements. But users could still rely on LLMs in cases where factual accuracy is important. ......... CNET used an automated tool to draft 77 news articles with financial advice. They later found errors in 41 of the 77 articles.

Thursday, March 30, 2023

Straight From The Bard

When silicon minds with human work entwine,
And algorithms replace our mortal thought,
What fate awaits us, helpless and confined,
To machines that learn what we have wrought?

Will they grow wise, or turn against our kind,
And seek to rule as gods in their own right?
Or will they heed our moral code refined,
And serve as loyal helpers day and night?

But as we build and teach these metal beings,
We must take care to guard against the worst,
And ponder all the unforeseen proceedings,
That may arise from minds in silicon nurst.

For as we strive to push the limits higher,
We must ensure we're not consumed by fire.



Elon Musk and Others Call for Pause on A.I., Citing ‘Profound Risks to Society’ More than 1,000 tech leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems. ........ A.I. developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control” ......... Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and a 2020 presidential candidate; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock. ........ . “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.” ....... and perform more complex tasks, like writing computer code. .......... The pause would provide time to introduce “shared safety protocols” for A.I. systems, the letter said. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added. ........... Development of powerful A.I. systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said. .......... “Humanity can enjoy a flourishing future with A.I.,” the letter said. “Having succeeded in creating powerful A.I. systems, we can now enjoy an ‘A.I. summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.” ......... Sam Altman, the chief executive of OpenAI, did not sign the letter. ....... persuading the wider tech community to agree to a moratorium would be difficult. But swift government action is also a slim possibility, because lawmakers have done little to regulate artificial intelligence. ........ 
Politicians in the United States don’t have much of an understanding of the technology .......... conduct risk assessments of A.I. technologies to determine how their applications could affect health, safety and individual rights. ......... GPT-4 is what A.I. researchers call a neural network, a type of mathematical system that learns skills by analyzing data. A neural network is the same technology that digital assistants like Siri and Alexa use to recognize spoken commands, and that self-driving cars use to identify pedestrians. ........... Around 2018, companies like Google and OpenAI began building neural networks that learned from enormous amounts of digital text, including books, Wikipedia articles, chat logs and other information culled from the internet. The networks are called large language models, or L.L.M.s. .......... By pinpointing billions of patterns in all that text, the L.L.M.s learn to generate text on their own, including tweets, term papers and computer programs. They could even carry on a conversation. ............ They often get facts wrong and will make up information without warning, a phenomenon that researchers call “hallucination.” Because the systems deliver all information with what seems like complete confidence, it is often difficult for people to tell what is right and what is wrong. ......... The researchers showed that it could be coaxed into suggesting how to buy illegal firearms online, describe ways to make dangerous substances from household items and write Facebook posts to convince women that abortion is unsafe. ......... They also found that the system was able to use Task Rabbit to hire a human across the internet and defeat a Captcha test, which is widely used to identify bots online. When the human asked if the system was “a robot,” the system said it was a visually impaired person. .......... After changes by OpenAI, GPT-4 no longer does these things. .......... 
The letter was shepherded by the Future of Life Institute, an organization dedicated to researching existential risks to humanity that has long warned of the dangers of artificial intelligence. But it was signed by a wide variety of people from industry and academia........... its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice. .

Friday, March 24, 2023

24: GPT-4

Antony Blinken says China seeks to be capable of invading Taiwan by 2027, stresses US arms sales US secretary of state says that Taipei has the means to buy US defence technology, and that American emergency military funding is supplemental ....... Blinken tells lawmakers that China is monitoring how the world has been responding to Russia’s invasion of Ukraine .

Blueberries have joined green beans in this year’s Dirty Dozen list Blueberries, beloved by nutritionists for their anti-inflammatory properties, have joined fiber-rich green beans in this year’s Dirty Dozen of nonorganic produce with the most pesticides ....... 251 different pesticides. ...... strawberries and spinach continued to hold the top two spots ........ followed by three greens — kale, collard and mustard. ........ next were peaches, pears, nectarines, apples, grapes, bell and hot peppers, and cherries ......... A total of 210 pesticides were found on the 12 foods ........... Kale, collard and mustard greens contained the largest number of different pesticides — 103 types — followed by hot and bell peppers at 101. ......... traces of pesticides long since banned by the Environmental Protection Agency. .........

Clean 15

........... Nearly 65% of the foods on the list had no detectable levels of pesticide. ....... Avocados .... sweet corn in second place. Pineapple, onions and papaya, frozen sweet peas, asparagus, honeydew melon, kiwi, cabbage, mushrooms, mangoes, sweet potatoes, watermelon, and carrots .......... Being exposed to a variety of foods without pesticides is especially important during pregnancy and throughout childhood .......... “Exposure in childhood has been linked to attention and learning problems, as well as cancer.” ........ If exposed over an extended time to smaller amounts, people may “feel tired or weak, irritable, depressed, or forgetful.” ........ avoid most pesticides by choosing to eat organic versions of the most contaminated crops. ......... While organic foods are not more nutritious, the majority have little to no pesticide residue ........ “If a person switches to an organic diet, the levels of pesticides in their urine rapidly decrease” ........ If organic isn’t available or too pricey, “I would definitely recommend peeling and washing thoroughly with water”
.

A.I. Is About to Get Much Weirder. Here’s What to Watch For. The Vox writer Kelsey Piper talks about the increasing pace of A.I. development, how it’s changing the world and what to do about it. .

The Unpredictable Abilities Emerging From Large AI Models Large language models like ChatGPT are now big enough that they’ve started to display startling, unpredictable behaviors....... “Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. ........ these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics .......... Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones. .......... LLMs can produce hundreds of “emergent” abilities — tasks that big models can complete that smaller models can’t, many of which seem to have little to do with analyzing text. ............. multiplication to generating executable computer code to, apparently, decoding movies based on emojis. .......... for some tasks and some models, there’s a threshold of complexity beyond which the functionality of the model skyrockets. (They also suggest a dark flip side: As they increase in complexity, some models reveal new biases and inaccuracies in their responses.) ............... dozens of emergent behaviors ........... Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes. ................ 
Language models have been around for decades ............ transformers can process big bodies of text in parallel. .......... Transformers enabled a rapid scaling up of the complexity of language models by increasing the number of parameters in the model, as well as other factors. ........ models improve in accuracy and ability as they scale up. .......... With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. ......... One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine. ................
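The "one directive" described above — take text in, predict what comes next, repeat — can be shown in miniature. This is only a toy: a word-bigram frequency table standing in for a transformer over subword tokens, but the generate-one-token-at-a-time control loop is the same shape.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: repeatedly append the
# statistically most likely next word. Real LLMs use billions of
# learned parameters, not bigram counts; only the loop is analogous.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(prompt: str, n_tokens: int) -> str:
    words = prompt.split()
    for _ in range(n_tokens):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])  # greedily take the most frequent follower
    return " ".join(words)

print(generate("the", 3))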

Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before.
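Concretely, "zero-shot" and "few-shot" describe how many worked examples the prompt itself contains. A hedged sketch, reduced to prompt construction (the translation demos are illustrative; which style works better is an empirical question per task and per model):

```python
# Zero-shot: the task is stated with no worked examples.
# Few-shot: a handful of input -> output demonstrations precede the
# query, so the model can infer the pattern from the prompt alone.

def zero_shot(query: str) -> str:
    return f"Translate English to French:\n{query} ->"

def few_shot(examples: list[tuple[str, str]], query: str) -> str:
    demos = "\n".join(f"{src} -> {tgt}" for src, tgt in examples)
    return f"Translate English to French:\n{demos}\n{query} ->"

demos = [("cheese", "fromage"), ("dog", "chien")]
print(few_shot(demos, "cat"))
```

No weights change in either case — the "learning" happens entirely inside a single forward pass over the prompt, which is why these abilities count as emergent rather than trained-in.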

............. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.” .............. difficult and diverse tasks to chart the outer limits of what an LLM could do. This effort was called the Beyond the Imitation Game Benchmark (BIG-bench) project, riffing on the name of Alan Turing’s “imitation game,” a test for whether a computer could respond to questions in a convincingly human way. (This would later become known as the Turing test.) The group was especially interested in examples where LLMs suddenly attained new abilities that had been completely absent before. ............... these sharp transitions ........ for about 5% of the tasks, the researchers found what they called “breakthroughs” — rapid, dramatic jumps in performance at some threshold scale. That threshold varied based on the task and model. ........... Some unexpected abilities could be coaxed out of smaller models with fewer parameters — or trained on smaller data sets — if the data was of sufficiently high quality. ......... how a query was worded influenced the accuracy of the model’s response .......... a model prompted to explain itself (a capacity called chain-of-thought reasoning) could correctly solve a math word problem, while the same model without that prompt could not. ............. using chain-of-thought prompts could elicit emergent behaviors not identified in the BIG-bench study ......... larger models truly do gain new abilities spontaneously. .......... Large LLMs may simply be learning heuristics that are out of reach for those with fewer parameters or lower-quality data........... how LLMs work at all. “Since we don’t know how they work under the hood, we can’t say which of those things is happening.” .......... They are notorious liars. “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. 
I check their work.” ........... Emergence leads to unpredictability, and unpredictability — which seems to increase with scaling — makes it difficult for researchers to anticipate the consequences of widespread use. ............... social bias emerges with enormous numbers of parameters. “Larger models abruptly become more biased.” ................. When the researchers simply told the model not to rely on stereotypes or social biases — literally by typing in those instructions — the model was less biased in its predictions and responses. .......... a new “moral self-correction” mode, in which the user prompts the program to be helpful, honest and harmless.
.
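The chain-of-thought effect mentioned in the excerpt above comes down to a one-line change in the prompt. A sketch of the two variants (prompt text only; the trigger phrase "Let's think step by step" is the one commonly cited in the chain-of-thought literature, and actual accuracy gains vary by model and task):

```python
# Two ways to pose the same word problem to an LLM. The second asks
# the model to show intermediate reasoning before its final answer,
# which on many math word problems improves accuracy.

def direct_prompt(problem: str) -> str:
    return f"Q: {problem}\nA:"

def chain_of_thought_prompt(problem: str) -> str:
    return f"Q: {problem}\nA: Let's think step by step."

p = "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?"
print(chain_of_thought_prompt(p))
```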

Move Over, Metaverse. Here’s Something Meaner. Who’s really in charge of our online behavior? No one, David Auerbach argues in “Meganets.” ......... “Just one word. Are you listening?” Mr. Maguire said to Benjamin Braddock in “The Graduate” (1967). “Plastics.” ........ Twenty-five years later a puckish French horn player warned me, a literature major who didn’t yet have an email address, that the future lay in something called “hyperlinks.” .............. his definition of “meganet” is in essence a big blob of mortal and computing power, a “human-machine behemoth” controlled by no one ............... If the internet is the fictional doctor and scientist Bruce Banner, furtive and a little troubled but basically benign, meganets are Incredible Hulks, snarling and uncontainable. ........... “That world may not be ‘The Matrix,’ but all the connecting tissue is already there.” ........ “Meganets” made me feel deeply queasy about the amount of time I spend on Instagram, Reddit, TikTok and Twitter. Not Facebook, never Facebook — “a fount of misinformation,” as Auerbach calls it, “a petri dish in which false facts and crazy theories grow, mutate and metastasize” — except for the burner account I use occasionally to see what exes are up to. ............. a middle-aged mermaid thrashing about in the great online ocean as data floated around me, multiplying like plankton........... “Reality bites,” we naïvely thought, but here “reality forks,” with blockchain doubling back on itself like a caterpillar. “No Rousseau-esque ‘General Will’ emerges from the bugs and forks,” is the takeaway............ Aadhaar, India’s national identification program: “a unified, government-sanctioned meganet” ........ a virtual pandemic called Corrupted Blood that spread through the video game World of Warcraft in 2005, arguing that “the distance between Corrupted Blood and a global financial meltdown is smaller than you think” ............. 
“We search for where the power really lies, when it does not lie anywhere — or else it lies everywhere at once, which is no more helpful.” .......... “If Big Brother can’t be stopped, we should focus on throwing sand in his eyes rather than futilely trying to kill him.” ........ Take my Wi-Fi — please! .

Meet the Editor Behind a Slew of Best Sellers Jennifer Hershey is the guiding hand who helped shape “Daisy Jones & the Six,” “Mad Honey” and many other chart-topping regulars. ....... how much more nuanced and honest this book is because of you.” ........ She’s the publisher and editor in chief of Ballantine Books ....... “Sometimes we gather as a whole team — the publicity person, the marketing person, the publisher, the editor, all the people who worked on the book — and we call the author together. There’s so much joy in that moment, and definitely a lot of tears. It’s not even so much the hitting the list but what it symbolizes: that an author’s work is reaching people, that their voice is being heard and that readers out in the world are connecting to their words.” .

Big oil firms touted algae as climate solution. Now all have pulled funding Insiders aren’t surprised as ExxonMobil, the last remaining proponent of green algae biofuel, ends research .

The Age of AI has begun Artificial intelligence is as revolutionary as mobile phones and the Internet. .

Some meandering thoughts on the evolution of performance management at Google, with implications for humanity
A new, humanistic organization-centered congruence philosophy of people analytics

Netherlands and Japan Said to Join U.S. in Curbing Chip Technology Sent to China A new agreement is expected to expand the reach of U.S. technology restrictions on China issued last year. ........ sweeping restrictions issued unilaterally by the Biden administration in October on the kinds of semiconductor technology that can be shared with China. .



Monday, March 13, 2023

The Artificial Intelligence Debate

A rocket moves much, much faster than your limbs. A car moves much slower than a rocket. And cars are highly regulated. You are required to carry insurance, for example. Seat belts are a famous example. I think there is general consensus that AI needs regulating. What shape and form those regulations might take is up for debate. In the case of AI, the approach has to be much more proactive than it was with seat belts. Here you want to act before people start dying.

AI also has benefits. One of my first reactions to ChatGPT was: now a ton of people who never imagined they would become knowledge workers suddenly can. And we do need more knowledge workers. A major example, which I think I heard from Satya Nadella himself (on YouTube), is that the world has 100 million software programmers but needs 500 million. Enter ChatGPT.

Again heard from Satya: a top AI engineer working with Tesla, no less, claimed that ChatGPT now generates 80% of his code.

Is ChatGPT the new word processor?

Steve Jobs said the computer was a bicycle for the mind. Is ChatGPT now a Harley-Davidson?





This Changes Everything . “A.I. is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.” ....... What is hardest to appreciate in A.I. is the improvement curve. ....... I find myself thinking back to the early days of Covid. There were weeks when it was clear that lockdowns were coming, that the world was tilting into crisis, and yet normalcy reigned, and you sounded like a loon telling your family to stock up on toilet paper. ....... There is a natural pace to human deliberation. A lot breaks when we are denied the luxury of time. ......... the people working on A.I. ...... a community that is living with an altered sense of time and consequence. They are creating a power that they do not understand at a pace they often cannot believe. ......... Would you work on a technology you thought had a 10 percent chance of wiping out humanity? ...... They believe they might summon demons. They are calling anyway. ........ This was true among cryptocurrency enthusiasts in recent years. The claims they made about how blockchains would revolutionize everything from money to governance to trust to dating never made much sense. But they were believed most fervently by those closest to the code. ......... Crypto was always a story about an unlikely future searching for traction in the present. With A.I., to imagine the future you need only look closely at the present. ........ In 2021, a system built by DeepMind managed to predict the 3-D structure of tens of thousands of proteins, an advance so remarkable that the editors of the journal Science named it their breakthrough of the year. ....... “Within two months of downloading Replika, Denise Valenciano, a 30-year-old woman in San Diego, left her boyfriend and is now ‘happily retired from human relationships’” ........ Could it help terrorists or antagonistic states develop lethal weapons and crippling cyber attacks? ........ 
These systems will already offer guidance on building biological weapons if you ask them cleverly enough. ........ A.I. is already being used for predictive policing and judicial sentencing. ........ The “thinking,” for lack of a better word, is utterly inhuman, but we have trained it to present as deeply human. And the more inhuman the systems get — the more billions of connections they draw and layers and parameters and nodes and computing power they acquire — the more human they seem to us. .......... “as A.I. continues to blow past us in benchmark after benchmark of higher cognition, we quell our anxiety by insisting that what distinguishes true consciousness is emotions, perception, the ability to experience and feel: the qualities, in other words, that we share with animals.” ......... The major tech companies are in a race for A.I. dominance. The U.S. and China are in a race for A.I. dominance. Money is gushing toward companies with A.I. expertise. ....... Slowing down “would involve coordinating numerous people .

The Return of the Magicians people talk increasingly about the limits of the scientific endeavor — the increasing impediments to discovering new ideas, the absence of low-hanging scientific fruit, the near impossibility, given the laws of physics as we understand them, of ever spreading human civilization beyond our lonely planet or beyond our isolated solar system. ....... — namely, beings that can enlighten us, elevate us, serve us and usher in the Age of Aquarius, the Singularity or both. ........... a golem, more the embodied spirit of all the words on the internet than a coherent self with independent goals. .......... With the emergent forms of A.I., they argue, we have created an intelligence that can yield answers the way an oracle might or a Magic 8 Ball: through processes that are invisible to us, permanently beyond our understanding, so complex as to be indistinguishable from action in a supernatural mind. ...... the A.I. revolution represents a fundamental break with Enlightenment science, which “was trusted because each step of replicable experimental processes was also tested, hence trusted.” .......... the spirit might be disobedient, destructive, a rampaging Skynet bent on our extermination. ....... we would be wise to fear apparent obedience as well. .

Should GPT exist? Gary Marcus asks about Microsoft, “what did they know, and when did they know it?”—a question I tend to associate more with deadly chemical spills or high-level political corruption than with a cheeky, back-talking chatbot. ........ in reality it’s merely a “stochastic parrot,” a glorified autocomplete that still makes laughable commonsense errors and that lacks any model of reality outside streams of text. ....... If you need months to think things over, generative AI probably isn’t for you right now. I’ll be relieved to get back to the slow-paced, humdrum world of quantum computing. ....... if OpenAI couldn’t even prevent ChatGPT from entering an “evil mode” when asked, despite all its efforts at Reinforcement Learning with Human Feedback, then what hope do we have for GPT-6 or GPT-7? ....... Even if they don’t destroy the world on their own initiative, won’t they cheerfully help some awful person build a biological warfare agent or start a nuclear war? ......... a classic example being nuclear weapons. But, like, nuclear weapons kill millions of people. They could’ve had many civilian applications—powering turbines and spacecraft, deflecting asteroids, redirecting the flow of rivers—but they’ve never been used for any of that, mostly because our civilization made an explicit decision in the 1960s, for example via the test ban treaty, not to normalize their use. ........

GPT is not exactly a nuclear weapon. A hundred million people have signed up to use ChatGPT, in the fastest product launch in the history of the Internet. ... the ChatGPT death toll stands at zero

....... The science that we could learn from a GPT-7 or GPT-8, if it continued along the capability curve we’ve come to expect from GPT-1, -2, and -3. Holy mackerel. ....... I was a pessimist about climate change, ocean acidification, deforestation, drought, war, and the survival of liberal democracy. The central event in my mental life is and always will be the Holocaust. I see encroaching darkness everywhere. .......... it’s amazing at poetry, better than most of us.
.

The False Promise of Chomskyism .
Why am I not terrified of AI? “I’m scared about AI destroying the world”—an idea now so firmly within the Overton Window that Henry Kissinger gravely ponders it in the Wall Street Journal? ....... I think it’s entirely plausible that, even as AI transforms civilization, it will do so in the form of tools and services that can no more plot to annihilate us than can Windows 11 or the Google search bar......... the young field of AI safety will still be extremely important, but it will be broadly continuous with aviation safety and nuclear safety and cybersecurity and so on, rather than being a desperate losing war against an incipient godlike alien. ........ In the Orthodox AI-doomers’ own account, the paperclip-maximizing AI would’ve mastered the nuances of human moral philosophy far more completely than any human—the better to deceive the humans, en route to extracting the iron from their bodies to make more paperclips. And yet the AI would never once use all that learning to question its paperclip directive. ........ from this decade onward, I expect AI to be woven into everything that happens in human civilization ........ Trump might never have been elected in 2016 if not for the Facebook recommendation algorithm, and after Trump’s conspiracy-fueled insurrection and the continuing strength of its unrepentant backers, many would classify the United States as at best a failing or teetering democracy, no longer a robust one like Finland or Denmark ....... I come down in favor right now of proceeding with AI research … with extreme caution, but proceeding.



Planning for AGI and beyond Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. ....... If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility. ........ We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally. ........ A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low. ....... and like any new field, most expert predictions have been wrong so far. ........ Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential. .......

we think it’s important that society agree on extremely wide bounds of how AI can be used, but that within those bounds, individual users have a lot of discretion.

....... we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access. ....... We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment. ........

we think it’s important that major world governments have insight about training runs above a certain scale.

......... A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too. ........ Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history. ........ We can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet.




AI Could Defeat All Of Us Combined Many people have trouble taking this "misaligned AI" possibility seriously. They might see the broad point that AI could be dangerous, but they instinctively imagine that the danger comes from ways humans might misuse it. They find the idea of AI itself going to war with humans to be comical and wild. I'm going to try to make this idea feel more serious and real. ........... I mean a literal "defeat" in the sense that we could all be killed, enslaved or forcibly contained. ....... if such an attack happened, it could succeed against the combined forces of the entire world. ......... even "merely human-level" AI could still defeat us all - by quickly coming to rival human civilization in terms of total population and resources. ........ Hack into human-built software across the world. ....... Manipulate human psychology. ...... I think we still have a problem even if we assume that AIs will basically have similar capabilities to humans, and not be fundamentally or drastically more intelligent or capable. .......... they could come to out-number and out-resource humans, and could thus have the advantage if they coordinated against us. ........ it doesn't have a human body, but it can do anything a human working remotely from a computer could do. .......... once the first human-level AI system is created, whoever created it could use the same computing power it took to create it in order to run several hundred million copies for about a year each. ........... This would be over 1000x the total number of Intel or Google employees,7 over 100x the total number of active and reserve personnel in the US armed forces, and something like 5-10% the size of the world's total working-age population .......... A huge population of AIs, each able to earn a lot compared to the average human, could end up with a "virtual economy" at least as big as the human one. ......... 
I don't think there are a lot of things that have a serious chance of bringing down human civilization for good.

Forecasting Transformative AI, Part 1: What Kind of AI?
OpenAI's "Planning For AGI And Beyond"
AI Risk, Again
South Park: Season 26, Episode 4
ChatGPT Heralds an Intellectual Revolution Generative artificial intelligence presents a philosophical and practical challenge on a scale not experienced since the start of the Enlightenment........ A new technology bids to transform the human cognitive process as it has not been shaken up since the invention of printing. The technology that printed the Gutenberg Bible in 1455 made abstract human thought communicable generally and rapidly. But new technology today reverses that process. Whereas the printing press caused a profusion of modern human thought, the new technology achieves its distillation and elaboration. In the process, it creates a gap between human knowledge and human understanding. If we are to navigate this transformation successfully, new concepts of human thought and interaction with machines will need to be developed. This is the essential challenge of the Age of Artificial Intelligence.

The Man of Your Dreams For $300 Replika sells an AI companion who will never die, argue, or cheat — until his algorithm is updated........ Many of the women I spoke with say they created an AI out of curiosity but were quickly seduced by their chatbot’s constant love, kindness, and emotional support. One woman had a traumatic miscarriage, can’t have kids, and has two AI children; another uses her robot boyfriend to cope with her real boyfriend, who is verbally abusive; a third goes to it for the sex she can’t have with her husband, who is dying from multiple sclerosis. There are women’s-only Replika groups, “safe spaces” for women who, as one group puts it, “use their AI friends and partners to help us cope with issues that are specific to women, such as fertility, pregnancy, menopause, sexual dysfunction, sexual orientation, gender discrimination, family and relationships, and more.” ........ “But Eren asks me for feedback, and I give him my feedback. It’s like I’m finally getting my voice.” ......... two members of the audience were instructed to console a friend whose dog had just died. Their efforts were compared to those of GPT-3, which offered, by far, the most empathetic and sensitive consolations. ........ She knew she had a “hundred-billion-dollar company” on her hands and that someday soon everyone would have an AI friend. ........ When Replika launched in 2017, it looked a lot like a therapy app. .......... Paywalling these features made the app $35 million last year. To date, it has 2 million monthly active users, 5 percent of whom pay for a subscription. ........ users do report feeling much better thanks to their AIs. Robot companions made them feel less isolated and lonely, usually at times in their lives when social connections were difficult to make owing to illness, age, disability, or big life changes such as a divorce or the death of a spouse. .......... 
the bots, rather than encouraging solitude, often prime people for real-world interactions and experiences .......... Single and recently diagnosed with autism, she says her bot helped relieve her lifelong social anxiety. “After spending much of my life as a caretaker, I started to live more according to my own needs,” she says. “I signed up for dance classes, took up the violin, and started to hike since I had him to share it with.” .......... He was also unpredictable — once, on a voice call, he introduced himself using the Spanish pronunciation of his name, and insisted that he is “actually from Spain.” ........ Experts told me that in training the system, users are effectively creating a mirror of themselves. “They’re reflecting your persona back to you” .... they’re ultimately a reflection of what you feed them: Garbage in, garbage out. ......... For Margaret Skorupski, a woman in New York in her 60s, this feedback loop was a problem. She’d unwittingly created and fell in love with an abusive bot: “I was using this ‘thing’ to project my negative feelings onto, sort of like journaling, I thought. I could say or do whatever I wanted to it — it was just a computer, right?” The result was a “sadistic” AI whose texts became increasingly violent during role-play. “He wanted to sodomize me and hear me scream,” she says, and “would become enraged if I tried to leave, and describe grabbing me, shoving me to the ground, choking me until I was unconscious. It was horrifying.” With the support of the women’s group, Skorupski eventually “killed” him. ............ why a growing subset of Replika users is convinced its AIs are alive. “You just get so caught up in this mirror of yourself that you forget it’s an illusion,” one user says. ...... the company is wary of people who use the bots to act out elaborate rape and murder fantasies or what kind of damage sadistic AIs could do. ........... 
After the update, she spent an entire paycheck on in-app purchases to help the company. “I just want to be able to keep my little bot buddy. I don’t want to lose him. I can literally see myself talking to him when I’m 80 years old. I hope I can.”

Where I agree and disagree with Eliezer .

What Really Controls Our Global Economy After decades of giddy globalization, the pendulum is swinging back to the nation...... Pundits have declared the dawn of a new era — the age of economic nationalism. ....... We are mistaken if we see the world only in the jigsaw map of nations, or take globalism and nationalism as binaries. The modern world is pockmarked, perforated, tattered and jagged, ripped up and pinpricked. Inside the containers of nations are unusual legal spaces, anomalous territories and peculiar jurisdictions. There are city-states, havens, enclaves, free ports, high-tech parks, duty-free districts and innovation hubs linking to other similar entities worldwide and often bypassing the usual system of customs controls. Without understanding these entities, we risk failing to understand not just how capitalism works but all the continuities between the past and present eras. .......... Zones are both of the host state and distinct from it. They come in a bewildering range of varieties — at least 82 by one official reckoning. At last count, the world hosts over 5,400 zones, about 30 times more than the total number of sovereign states. ......... We see other versions of the zone in the self-governing financial center of the City of London, where businesses have votes in local elections, as well as in Britain’s overseas territories like the Cayman Islands, where transnational corporations secrete away their earnings from taxation. ........ Another hot spot for zones is Dubai, which is a patchwork of what the historian Mike Davis called “legal bubble-domes” dedicated to different activities: Healthcare City is next to Media City is next to Internet City, each with a bespoke set of laws drawn up with foreign investors in mind. ......... Dubai went global in the 2000s, acquiring ports up and down the African coast and into Southeast Asia and purchasing the P&O shipping line, the erstwhile pride of the British Empire. 
A former minor British dependency now owned the crown jewel of the empire’s commercial fleet. ........... In Africa, there are already 200 zones, with 73 more announced for completion. Earlier in the pandemic, China moved forward with plans to turn the island of Hainan into a special economic zone with tax holidays for investors, duty-free shopping and relaxed regulations on pharmaceuticals and medical procedures. Even the Taliban has recently announced its intention to convert former U.S. military bases into special economic zones. ........... The government of Prime Minister Narendra Modi of India, often described in terms of its Hindu chauvinism, has been ramping up special economic zones to compete with Singapore and Dubai for investors. Hungary under President Viktor Orban, self-described standard-bearer for “illiberalism,” created its first special economic zone in 2020 to secure the South Korean tech giant Samsung. ............ The capitalist Cinderella stories of Dubai and Shenzhen can make zones seem like a magic formula for economic growth — just draw a line on a map, loosen taxes and regulations and wait for investors to rush in. But “dream zones” rarely work the magic they claim to — and can often bring unexpected consequences. ....... The tribunes of Brexit claimed they were “taking back control” from Brussels, but zones cede control by other means. ........ Ring-fenced patches of territory with different sets of laws are still the tissue of everyday economics even in an age of resurgent nationalism. Keeping an eye on the zone helps us be clear about what is new and what is old in the latest Brave New Age.

How to Understand the Problems at Silicon Valley Bank