SubG
Aug 19, 2004

It's a hard world for little things.

Bar Ran Dun posted:

Here’s a general example: before we had controls for burners in a boiler, there was a person (a general intelligence) who would monitor the feedback from the boiler and adjust the burners by changing burners, tips, or pressures.

Before the pneumatic control valve was invented, a person had to do the same thing in steam systems by manually adjusting a globe valve.

In rocketry you had astronauts manually controlling burn to land on the moon.

A general intelligence (a person) can get into a car and drive it on the road.

That is to say, being an (imperfect) universal controller is a characteristic of our general intelligence.

I don’t think it’s an extraordinary claim that an artificial general intelligence would be as well.
What are you actually asking here? Is it possible? Sure. But it doesn't follow from first principles or anything like that.

Bar Ran Dun
Jan 22, 2006




It rather does follow from first principles if consciousness arises from feedback loops.

SubG
Aug 19, 2004

It's a hard world for little things.

Bar Ran Dun posted:

It rather does follow from first principles if consciousness arises from feedback loops.
Evolution is a feedback loop. Fish arose from evolution. Birds arose from evolution. That doesn't imply that fish can fly or birds can swim.

Bar Ran Dun
Jan 22, 2006




SubG posted:

Evolution is a feedback loop. Fish arose from evolution. Birds arose from evolution. That doesn't imply that fish can fly or birds can swim.

No, the example of evolution does support that: a feedback loop that isn’t even a general intelligence generated controls for flight (several times) and swimming (again several times). I was only arguing that it’s true for GI and AGI; you’ve managed to pick an example that suggests potentially all feedback loops could generate controllers given enough iteration.

SubG
Aug 19, 2004

It's a hard world for little things.

Bar Ran Dun posted:

No, the example of evolution does support that: a feedback loop that isn’t even a general intelligence generated controls for flight (several times) and swimming (again several times). I was only arguing that it’s true for GI and AGI; you’ve managed to pick an example that suggests potentially all feedback loops could generate controllers given enough iteration.
I'm still not sure what you're actually trying to argue, but sure. You say it can be proven from first principles, great. Go ahead and do so.

Bar Ran Dun
Jan 22, 2006




SubG posted:

Go ahead and do so.

This is extremely straightforward, SubG. You perceive output signals with your senses. You can make changes to systems you interact with to affect those output signals.

That’s a controller; that’s what feedback control is. General intelligences are controllers. It’s not all they are, but it’s an essential characteristic.

For an AGI, anything you can give it a feedback signal from, a means to affect, and a goal/set point for, it could be a controller for.

SubG
Aug 19, 2004

It's a hard world for little things.

Bar Ran Dun posted:

This is extremely straightforward, SubG. You perceive output signals with your senses. You can make changes to systems you interact with to affect those output signals.

That’s a controller; that’s what feedback control is. General intelligences are controllers. It’s not all they are, but it’s an essential characteristic.

For an AGI, anything you can give it a feedback signal from, a means to affect, and a goal/set point for, it could be a controller for.
So are you arguing that all control problems are isomorphic? That all feedback loops are isomorphic? That any system that can solve some specific exemplar problem (like regulating a pneumatic valve in some unspecified way) can solve all possible control problems? All solvable control problems? What?

Bar Ran Dun
Jan 22, 2006




You are spinning a whole lotta wheels there.

I’m only making a single very basic assertion. General intelligences (our brains) can function as imperfect universal feedback controllers; therefore artificial general intelligences will be able to function as universal feedback controllers.

Edit: if feedback loops are essential to consciousness.

KillHour
Oct 28, 2007


As far as I can tell, your assertion is just trivially true by definition. If you define an AGI as something that can generally solve arbitrary tasks (albeit imperfectly), then anything that can generally solve arbitrary tasks is an AGI.

Now that doesn't mean just because x is an AGI, all AGIs are x. There could be AGIs that aren't x. But if x fulfills the formal requirements to be an AGI, it is one.

SubG
Aug 19, 2004

It's a hard world for little things.

KillHour posted:

As far as I can tell, your assertion is just trivially true by definition. If you define an AGI as something that can generally solve arbitrary tasks (albeit imperfectly), then anything that can generally solve arbitrary tasks is an AGI.
Yeah, it's either tautologically true (if you define "universal" to mean "solvable by an AGI") or it's pure conjecture (if you define "universal" to mean "any problem", or even "any solvable problem").

And in either case it doesn't demonstrate that you can construct the solution, just that it exists.

Like consider the set of all things that have ever been solved by humans. It is tautologically true that humans can solve all the problems in that set. But that doesn't mean that any given human selected at random can solve all of the problems in the set. It doesn't mean that any arbitrary subset of humans can solve all the problems in the set. It doesn't mean that there's any solution better than just brute force throwing people at the problems until they're all solved. If you have a process for generating humans (I mean beyond the one humans have always used) it doesn't mean that there's some mechanism by which you could generate a human capable of solving all of the problems in the set.

Bar Ran Dun
Jan 22, 2006




Being a controller isn’t solving a task. It’s adjusting towards a set point.

Think of a pipe with a flow rate of ten gallons per second. A feedback controller is a device that receives a signal from the output and then acts on the input to change the output towards a set point.

So if we have a set point of 50 gallons per second, the controller receives the signal of 10 and then opens a valve to allow more flow through, so the flow rate through the pipe starts rising towards the set point.

You can do this with your brain. You look at the flow meter, see that it’s not fifty, then open the globe valve more. Your brain can be a controller for anything it can receive a feedback signal from. This is an essential ability of minds if feedback loops are a requirement for consciousness to exist.
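Here’s a minimal sketch of that loop as code, just to make the structure concrete (the plant model, gain, and numbers are all made up for illustration):

# Toy feedback loop: drive a pipe's flow rate toward a set point.
def control_step(set_point, measured_flow, valve_position, gain=0.002):
    # Feedback signal: the error between what we want and what we measure.
    error = set_point - measured_flow
    # Adjust the input (valve opening) in proportion to the error.
    valve_position += gain * error
    # A real valve can only be between fully closed (0) and fully open (1).
    return max(0.0, min(1.0, valve_position))

def plant(valve_position, max_flow=100.0):
    # Made-up plant model: flow is proportional to valve opening.
    return max_flow * valve_position

valve = 0.1  # starts nearly closed: about 10 gallons per second
for _ in range(50):
    flow = plant(valve)
    valve = control_step(set_point=50.0, measured_flow=flow, valve_position=valve)

print(round(plant(valve), 1))  # settles near the 50 gal/s set point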

An artificial general intelligence is going to be able to be a controller for anything we can give it a digital feedback signal from.

SubG
Aug 19, 2004

It's a hard world for little things.

Bar Ran Dun posted:

An artificial general intelligence is going to be able to be a controller for anything we can give it a digital feedback signal from.
Cool, this will be a great way to solve NP-hard problems, then.

KillHour
Oct 28, 2007


A neural network can already do that. That's literally what they do/are.

SubG
Aug 19, 2004

It's a hard world for little things.

KillHour posted:

A neural network can already do that. That's literally what they do/are.
Neural networks have solved the halting problem? Cool. I'd love a link to the paper.

Bar Ran Dun
Jan 22, 2006




KillHour posted:

A neural network can already do that. That's literally what they do/are.

Yes, for solved (fully described) systems and for models of systems.

You can train a neural network to be the automation for a power plant (a fully described system). You can’t train one to control inflation (a complex system that can’t be fully described).

SubG
Aug 19, 2004

It's a hard world for little things.

Bar Ran Dun posted:

Yes, for solved (fully described) systems and for models of systems.

You can train a neural network to be the automation for a power plant (a fully described system). You can’t train one to control inflation (a complex system that can’t be fully described).
Okay, sure. If we define AGI as something like "something that can solve any already-solved problem" and "universal controller" as "a controller that can solve any solved control problem" then yes, having an AGI implies you have a universal controller.

What was your original question? Or were you just offering this as an observation?

KillHour
Oct 28, 2007


SubG posted:

Neural networks have solved the halting problem? Cool. I'd love a link to the paper.

You keep talking about a lot of different things, so I'm really not sure you understand what they mean.

A neural net is a universal feedback controller. That's what training is - it does a thing and an evaluation metric adjusts it based on the output. It's universal in the sense that it can be trained to do any task for which a good evaluation metric can be created. It won't necessarily be perfect at it, and it may require an insane amount of memory, but that's not disqualifying.
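Here's a toy sketch of that training-as-feedback structure, fitting a single weight with gradient descent (the data and learning rate are made up; this isn't any particular library's API):

# Toy "training is a feedback loop": fit y = w * x with one adjustable weight.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up inputs and targets (y = 2x)
w = 0.0     # the parameter the loop adjusts
lr = 0.05   # learning rate

for _ in range(200):
    grad = 0.0
    for x, y in data:
        pred = w * x            # the network "does a thing"
        error = pred - y        # the evaluation metric looks at the output
        grad += 2 * error * x   # gradient of squared error w.r.t. w
    w -= lr * grad / len(data)  # feedback: adjust the weight based on the output

print(round(w, 3))  # converges to about 2.0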

You brought up NP-hard. NP-hard does not mean impossible; it means slow. I can write an algorithm that solves the traveling salesman problem in a few lines of code. It can't be solved both perfectly and efficiently, but we already clarified that being perfect is not a requirement and nobody said anything about efficiency.
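For example, here's a brute-force traveling salesman solver in a few lines: it's exact, just factorially slow (the city coordinates are made up):

from itertools import permutations
from math import dist

# Made-up cities; with n of them this checks all n! orderings.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 2), "D": (6, 6)}

def tour_length(order):
    # Length of the closed tour visiting the cities in the given order.
    points = [cities[name] for name in order]
    return sum(dist(points[i], points[(i + 1) % len(points)])
               for i in range(len(points)))

best = min(permutations(cities), key=tour_length)
print(best, round(tour_length(best), 2))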

The halting problem is similarly unrelated. If something is undecidable, a universal feedback controller isn't going to magically make it decidable. The answer to the halting problem is that it's undecidable in the same way the answer to 1/0 is undefined.

"Universal" means the same thing as "general" - that it's not constrained to a small set of predetermined capabilities. It doesn't mean "omnipotent."

Bar Ran Dun posted:

Yes, for solved (fully described) systems and for models of systems.

You can train a neural network to be the automation for a power plant (a fully described system). You can’t train one to control inflation (a complex system that can’t be fully described).

You can absolutely train one to control inflation. You just have to give it full control over the economy and tell it what worked and what didn't, and after about a million horrific economic crashes it will probably be pretty good.

It's not practical to do that, but that's irrelevant. People also suck at controlling inflation for pretty much the same reason. We just probably suck less than a neural network hooked up to the stock market.

KillHour fucked around with this message at 03:04 on Jul 1, 2023

Bar Ran Dun
Jan 22, 2006




SubG posted:

Okay, sure. If we define AGI as something like "something that can solve any already-solved problem" and "universal controller" as "a controller that can solve any solved control problem" then yes, having an AGI implies you have a universal controller.

What was your original question? Or were you just offering this as an observation?

No, that was about neural networks. I think AGI will be a controller for any system, including the complex ones.

SubG
Aug 19, 2004

It's a hard world for little things.

KillHour posted:

"Universal" means the same thing as "general" - that it's not constrained to a small set of predetermined capabilities. It doesn't mean "omnipotent."
Yes. Literally the first thing I said was that either the claim is tautologically true, or it does not follow from first principles. It appears that the original intent was the former—Bar Ran Dun was just making an observation that's true by definition. But since they appeared to be disagreeing with me, I was attempting to figure out what the source of the disagreement was.

Bar Ran Dun
Jan 22, 2006




NYTs dropped another AGI article

https://www.nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html?smid=nytcore-ios-share&referringSource=articleShare

This one deals with neoliberalism.

Gynovore
Jun 17, 2009

Forget your RoboCoX or your StickyCoX or your EvilCoX, MY CoX has Blinking Bewbs!

WHY IS THIS GAME DEAD?!

Paywall.

Bar Ran Dun
Jan 22, 2006




Another in the Times on AI and math. Sounds like the math models are coming.

https://www.nytimes.com/2023/07/02/science/ai-mathematics-machine-learning.html?smid=nytcore-ios-share&referringSource=articleShare

Many of us will have used tools that automate calculations, even complex ones. Apparently the models are working towards automating the mathematical reasoning side.


Last I checked, it was discouraged to copy-paste articles or to tell folks how to bypass paywalls. If anyone has a problem with me posting the text, just let me know; I’m going to post both for Gynovore.

Bar Ran Dun
Jan 22, 2006




nyts posted:


In May, more than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories warned.

This came on the heels of another high-profile letter, signed by the likes of Elon Musk and Steve Wozniak, a co-founder of Apple, calling for a six-month moratorium on the development of advanced A.I. systems.

Meanwhile, the Biden administration has urged responsible A.I. innovation, stating that “in order to seize the opportunities” it offers, we “must first manage its risks.” In Congress, Senator Chuck Schumer called for “first of their kind” listening sessions on the potential and risks of A.I., a crash course of sorts from industry executives, academics, civil rights activists and other stakeholders.

The mounting anxiety about A.I. isn’t because of the boring but reliable technologies that autocomplete our text messages or direct robot vacuums to dodge obstacles in our living rooms. It is the rise of artificial general intelligence, or A.G.I., that worries the experts.

A.G.I. doesn’t exist yet, but some believe that the rapidly growing capabilities of OpenAI’s ChatGPT suggest its emergence is near. Sam Altman, a co-founder of OpenAI, has described it as “systems that are generally smarter than humans.” Building such systems remains a daunting — some say impossible — task. But the benefits appear truly tantalizing.

Imagine Roombas, no longer condemned to vacuuming the floors, that evolve into all-purpose robots, happy to brew morning coffee or fold laundry — without ever being programmed to do these things.

Sounds appealing. But should these A.G.I. Roombas get too powerful, their mission to create a spotless utopia might get messy for their dust-spreading human masters. At least we’ve had a good run.

Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization. Mr. Altman, the face of this campaign, embarked on a global tour to charm lawmakers. Earlier this year he wrote that A.G.I. might even turbocharge the economy, boost scientific knowledge and “elevate humanity by increasing abundance.”

This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral.

They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.

But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.

Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market.

Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.

Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation.

Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.

It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.

Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).

These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how.

A.G.I. will never overcome the market’s demands for profit.

Remember when Uber, with its cheap rates, was courting cities to serve as their public transportation systems?

It all began nicely, with Uber promising implausibly cheap rides, courtesy of a future with self-driving cars and minimal labor costs. Deep-pocketed investors loved this vision, even absorbing Uber’s multibillion-dollar losses.

But when reality descended, the self-driving cars were still a pipe dream. The investors demanded returns and Uber was forced to raise prices. Users that relied on it to replace public buses and trains were left on the sidewalk.

The neoliberal instinct behind Uber’s business model is that the private sector can do better than the public sector — the market bias.

It’s not just cities and public transit. Hospitals, police departments and even the Pentagon increasingly rely on Silicon Valley to accomplish their missions.

With A.G.I., this reliance will only deepen, not least because A.G.I. is unbounded in its scope and ambition. No administrative or government services would be immune to its promise of disruption.

Moreover, A.G.I. doesn’t even have to exist to lure them in. This, at any rate, is the lesson of Theranos, a start-up that promised to “solve” health care through a revolutionary blood-testing technology and a former darling of America’s elites. Its victims are real, even if its technology never was.

After so many Uber- and Theranos-like traumas, we already know what to expect of an A.G.I. rollout. It will consist of two phases. First, the charm offensive of heavily subsidized services. Then the ugly retrenchment, with the overdependent users and agencies shouldering the costs of making them profitable.

As always, Silicon Valley mavens play down the market’s role. In a recent essay titled “Why A.I. Will Save the World,” Marc Andreessen, a prominent tech investor, even proclaims that A.I. “is owned by people and controlled by people, like any other technology.”

Only a venture capitalist can traffic in such exquisite euphemisms. Most modern technologies are owned by corporations. And they — not the mythical “people” — will be the ones that will monetize saving the world.

And are they really saving it? The record, so far, is poor. Companies like Airbnb and TaskRabbit were welcomed as saviors for the beleaguered middle class; Tesla’s electric cars were seen as a remedy to a warming planet. Soylent, the meal-replacement shake, embarked on a mission to “solve” global hunger, while Facebook vowed to “solve” connectivity issues in the Global South. None of these companies saved the world.

A decade ago, I called this solutionism, but “digital neoliberalism” would be just as fitting. This worldview reframes social problems in light of for-profit technological solutions. As a result, concerns that belong in the public domain are reimagined as entrepreneurial opportunities in the marketplace.

A.G.I.-ism has rekindled this solutionist fervor. Last year, Mr. Altman stated that “A.G.I. is probably necessary for humanity to survive” because “our problems seem too big” for us to “solve without better tools.” He’s recently asserted that A.G.I. will be a catalyst for human flourishing.

But companies need profits, and such benevolence, especially from unprofitable firms burning investors’ billions, is uncommon. OpenAI, having accepted billions from Microsoft, has contemplated raising another $100 billion to build A.G.I. Those investments will need to be earned back — against the service’s staggering invisible costs. (One estimate from February put the expense of operating ChatGPT at $700,000 per day.)

Thus, the ugly retrenchment phase, with aggressive price hikes to make an A.G.I. service profitable, might arrive before “abundance” and “flourishing.” But how many public institutions would mistake fickle markets for affordable technologies and become dependent on OpenAI’s expensive offerings by then?

And if you dislike your town outsourcing public transportation to a fragile start-up, would you want it farming out welfare services, waste management and public safety to the possibly even more volatile A.G.I. firms?

A.G.I. will dull the pain of our thorniest problems without fixing them.

Neoliberalism has a knack for mobilizing technology to make society’s miseries bearable. I recall an innovative tech venture from 2017 that promised to improve commuters’ use of a Chicago subway line. It offered rewards to discourage metro riders from traveling at peak times. Its creators leveraged technology to influence the demand side (the riders), seeing structural changes to the supply side (like raising public transport funding) as too difficult. Tech would help make Chicagoans adapt to the city’s deteriorating infrastructure rather than fixing it in order to meet the public’s needs.

This is the adaptation bias — the aspiration that, with a technological wand, we can become desensitized to our plight. It’s the product of neoliberalism’s relentless cheerleading for self-reliance and resilience.

The message is clear: gear up, enhance your human capital and chart your course like a start-up. And A.G.I.-ism echoes this tune. Bill Gates has trumpeted that A.I. can “help people everywhere improve their lives.”

The solutionist feast is only getting started: Whether it’s fighting the next pandemic, the loneliness epidemic or inflation, A.I. is already pitched as an all-purpose hammer for many real and imaginary nails. However, the decade lost to the solutionist folly reveals the limits of such technological fixes.

To be sure, Silicon Valley’s many apps — to monitor our spending, calories and workout regimes — are occasionally helpful. But they mostly ignore the underlying causes of poverty or obesity. And without tackling the causes, we remain stuck in the realm of adaptation, not transformation.

There’s a difference between nudging us to follow our walking routines — a solution that favors individual adaptation — and understanding why our towns have no public spaces to walk on — a prerequisite for a politics-friendly solution that favors collective and institutional transformation.

But A.G.I.-ism, like neoliberalism, sees public institutions as unimaginative and not particularly productive. They should just adapt to A.G.I., at least according to Mr. Altman, who recently said he was nervous about “the speed with which our institutions can adapt” — part of the reason, he added, “of why we want to start deploying these systems really early, while they’re really weak, so that people have as much time as possible to do this.”

But should institutions only adapt? Can’t they develop their own transformative agendas for improving humanity’s intelligence? Or do we use institutions only to mitigate the risks of Silicon Valley’s own technologies?

A.G.I. undermines civic virtues and amplifies trends we already dislike.

A common criticism of neoliberalism is that it has flattened our political life, rearranging it around efficiency. “The Problem of Social Cost,” a 1960 article that has become a classic of the neoliberal canon, preaches that a polluting factory and its victims should not bother bringing their disputes to court. Such fights are inefficient — who needs justice, anyway? — and stand in the way of market activity. Instead, the parties should privately bargain over compensation and get on with their business.

This fixation on efficiency is how we arrived at “solving” climate change by letting the worst offenders continue as before. The way to avoid the shackles of regulation is to devise a scheme — in this case, taxing carbon — that lets polluters buy credits to match the extra carbon they emit.

This culture of efficiency, in which markets measure the worth of things and substitute for justice, inevitably corrodes civic virtues.

And the problems this creates are visible everywhere. Academics fret that, under neoliberalism, research and teaching have become commodities. Doctors lament that hospitals prioritize more profitable services such as elective surgery over emergency care. Journalists hate that the worth of their articles is measured in eyeballs.

Now imagine unleashing A.G.I. on these esteemed institutions — the university, the hospital, the newspaper — with the noble mission of “fixing” them. Their implicit civic missions would remain invisible to A.G.I., for those missions are rarely quantified even in their annual reports — the sort of materials that go into training the models behind A.G.I.

After all, who likes to boast that his class on Renaissance history got only a handful of students? Or that her article on corruption in some faraway land got only a dozen page views? Inefficient and unprofitable, such outliers miraculously survive even in the current system. The rest of the institution quietly subsidizes them, prioritizing values other than profit-driven “efficiency.”

Will this still be the case in the A.G.I. utopia? Or will fixing our institutions through A.G.I. be like handing them over to ruthless consultants? They, too, offer data-bolstered “solutions” for maximizing efficiency. But these solutions often fail to grasp the messy interplay of values, missions and traditions at the heart of institutions — an interplay that is rarely visible if you only scratch their data surface.

In fact, the remarkable performance of ChatGPT-like services is, by design, a refusal to grasp reality at a deeper level, beyond the data’s surface. So whereas earlier A.I. systems relied on explicit rules and required someone like Newton to theorize gravity — to ask how and why apples fall — newer systems like A.G.I. simply learn to predict gravity’s effects by observing millions of apples fall to the ground.

However, if all that A.G.I. sees are cash-strapped institutions fighting for survival, it may never infer their true ethos. Good luck discerning the meaning of the Hippocratic oath by observing hospitals that have been turned into profit centers.

Margaret Thatcher’s other famous neoliberal dictum was that “there is no such thing as society.”

The A.G.I. lobby unwittingly shares this grim view. For them, the kind of intelligence worth replicating is a function of what happens in individuals’ heads rather than in society at large.

But human intelligence is as much a product of policies and institutions as it is of genes and individual aptitudes. It’s easier to be smart on a fellowship in the Library of Congress than while working several jobs in a place without a bookstore or even decent Wi-Fi.

It doesn’t seem all that controversial to suggest that more scholarships and public libraries will do wonders for boosting human intelligence. But for the solutionist crowd in Silicon Valley, augmenting intelligence is primarily a technological problem — hence the excitement about A.G.I.

However, if A.G.I.-ism really is neoliberalism by other means, then we should be ready to see fewer — not more — intelligence-enabling institutions. After all, they are the remnants of that dreaded “society” that, for neoliberals, doesn’t really exist. A.G.I.’s grand project of amplifying intelligence may end up shrinking it.

Because of such solutionist bias, even seemingly innovative policy ideas around A.G.I. fail to excite. Take the recent proposal for a “Manhattan Project for A.I. Safety.” This is premised on the false idea that there’s no alternative to A.G.I.

But wouldn’t our quest for augmenting intelligence be far more effective if the government funded a Manhattan Project for culture and education and the institutions that nurture them instead?

Without such efforts, the vast cultural resources of our existing public institutions risk becoming mere training data sets for A.G.I. start-ups, reinforcing the falsehood that society doesn’t exist.

Depending on how (and if) the robot rebellion unfolds, A.G.I. may or may not prove an existential threat. But with its antisocial bent and its neoliberal biases, A.G.I.-ism already is: We don’t need to wait for the magic Roombas to question its tenets.

Gynovore
Jun 17, 2009

Forget your RoboCoX or your StickyCoX or your EvilCoX, MY CoX has Blinking Bewbs!

WHY IS THIS GAME DEAD?!

Bar Ran Dun posted:

Last I checked it was discouraged to copy paste the articles or to tell folks about how to bypass pay walls.

Thanks. AFAIK the former is OK.

Bar Ran Dun
Jan 22, 2006




“NYTs” posted:


In the collection of the Getty museum in Los Angeles is a portrait from the 17th century of the ancient Greek mathematician Euclid: disheveled, holding up sheets of “Elements,” his treatise on geometry, with grimy hands.

For more than 2,000 years, Euclid’s text was the paradigm of mathematical argumentation and reasoning. “Euclid famously starts with ‘definitions’ that are almost poetic,” Jeremy Avigad, a logician at Carnegie Mellon University, said in an email. “He then built the mathematics of the time on top of that, proving things in such a way that each successive step ‘clearly follows’ from previous ones, using the basic notions, definitions and prior theorems.” There were complaints that some of Euclid’s “obvious” steps were less than obvious, Dr. Avigad said, yet the system worked.

But by the 20th century, mathematicians were no longer willing to ground mathematics in this intuitive geometric foundation. Instead they developed formal systems — precise symbolic representations, mechanical rules. Eventually, this formalization allowed mathematics to be translated into computer code. In 1976, the four-color theorem — which states that four colors are sufficient to fill a map so that no two adjacent regions are the same color — became the first major theorem proved with the help of computational brute force.

Now mathematicians are grappling with the latest transformative force: artificial intelligence.

In 2019, Christian Szegedy, a computer scientist formerly at Google and now at a start-up in the Bay Area, predicted that a computer system would match or exceed the problem-solving ability of the best human mathematicians within a decade. Last year he revised the target date to 2026.

Akshay Venkatesh, a mathematician at the Institute for Advanced Study in Princeton and a winner of the Fields Medal in 2018, isn’t currently interested in using A.I., but he is keen on talking about it. “I want my students to realize that the field they’re in is going to change a lot,” he said in an interview last year. He recently added by email: “I am not opposed to thoughtful and deliberate use of technology to support our human understanding. But I strongly believe that mindfulness about the way we use it is essential.”

In February, Dr. Avigad attended a workshop about “machine-assisted proofs” at the Institute for Pure and Applied Mathematics, on the campus of the University of California, Los Angeles. (He visited the Euclid portrait on the final day of the workshop.) The gathering drew an atypical mix of mathematicians and computer scientists. “It feels consequential,” said Terence Tao, a mathematician at the university, winner of a Fields Medal in 2006 and the workshop’s lead organizer.

Dr. Tao noted that only in the last couple years have mathematicians started worrying about A.I.’s potential threats, whether to mathematical aesthetics or to themselves. That prominent community members are now broaching the issues and exploring the potential “kind of breaks the taboo,” he said.

One conspicuous workshop attendee sat in the front row: a trapezoidal box named “raise-hand robot” that emitted a mechanical murmur and lifted its hand whenever an online participant had a question. “It helps if robots are cute and nonthreatening,” Dr. Tao said.

Bring on the “proof whiners”

These days there is no shortage of gadgetry for optimizing our lives — diet, sleep, exercise. “We like to attach stuff to ourselves to make it a little easier to get things right,” Jordan Ellenberg, a mathematician at the University of Wisconsin-Madison, said during a workshop break. A.I. gadgetry might do the same for mathematics, he added: “It’s very clear that the question is, What can machines do for us, not what will machines do to us.”

One math gadget is called a proof assistant, or interactive theorem prover. (“Automath” was an early incarnation in the 1960s.) Step-by-step, a mathematician translates a proof into code; then a software program checks whether the reasoning is correct. Verifications accumulate in a library, a dynamic canonical reference that others can consult. This type of formalization provides a foundation for mathematics today, said Dr. Avigad, who is the director of the Hoskinson Center for Formal Mathematics (funded by the crypto entrepreneur Charles Hoskinson), “in just the same way that Euclid was trying to codify and provide a foundation for the mathematics of his time.”

Of late, the open-source proof assistant system Lean is attracting attention. Developed at Microsoft by Leonardo de Moura, a computer scientist now with Amazon, Lean uses automated reasoning, which is powered by what is known as good old-fashioned artificial intelligence, or GOFAI — symbolic A.I., inspired by logic. So far the Lean community has verified an intriguing theorem about turning a sphere inside out as well as a pivotal theorem in a scheme for unifying mathematical realms, among other gambits.

But a proof assistant also has drawbacks: It often complains that it does not understand the definitions, axioms or reasoning steps entered by the mathematician, and for this it has been called a “proof whiner.” All that whining can make research cumbersome. But Heather Macbeth, a mathematician at Fordham University, said that this same feature — providing line-by-line feedback — also makes the systems useful for teaching.

In the spring, Dr. Macbeth designed a “bilingual” course: She translated every problem presented on the blackboard into Lean code in the lecture notes, and students submitted solutions to homework problems both in Lean and prose. “It gave them confidence,” Dr. Macbeth said, because they received instant feedback on when the proof was finished and whether each step along the way was right or wrong.

Since attending the workshop, Emily Riehl, a mathematician at Johns Hopkins University, used an experimental proof-assistant program to formalize proofs she had previously published with a co-author. By the end of a verification, she said, “I’m really, really deep into understanding the proof, way deeper than I’ve ever understood before. I’m thinking so clearly that I can explain it to a really dumb computer.”

Brute reason — but is it math?

Another automated-reasoning tool, used by Marijn Heule, a computer scientist at Carnegie Mellon University and an Amazon scholar, is what he colloquially calls “brute reasoning” (or, more technically, a Satisfiability, or SAT, solver). By merely stating, with a carefully crafted encoding, which “exotic object” you want to find, he said, a supercomputer network churns through a search space and determines whether or not that entity exists.

Just before the workshop, Dr. Heule and one of his Ph.D. students, Bernardo Subercaseaux, finalized their solution to a longstanding problem with a file that was 50 terabytes in size. Yet that file hardly compared with a result that Dr. Heule and collaborators produced in 2016: “Two-hundred-terabyte maths proof is largest ever,” a headline in Nature announced. The article went on to ask whether solving problems with such tools truly counted as math. In Dr. Heule’s view, this approach is needed “to solve problems that are beyond what humans can do.”

Another set of tools uses machine learning, which synthesizes oodles of data and detects patterns but is not good at logical, step-by-step reasoning. Google’s DeepMind designs machine-learning algorithms to tackle the likes of protein folding (AlphaFold) and winning at chess (AlphaZero). In a 2021 Nature paper, a team described their results as “advancing mathematics by guiding human intuition with A.I.”

Yuhuai “Tony” Wu, a computer scientist formerly at Google and now with a start-up in the Bay Area, has outlined a grander machine-learning goal: to “solve mathematics.” At Google, Dr. Wu explored how the large language models that empower chatbots might help with mathematics. The team used a model that was trained on internet data and then fine-tuned on a large math-rich data set, using, for instance, an online archive of math and science papers. When asked in everyday English to solve math problems, this specialized chatbot, named Minerva, was “pretty good at imitating humans,” Dr. Wu said at the workshop. The model obtained scores that were better than an average 16-year-old student on high school math exams.

Ultimately, Dr. Wu said, he envisioned an “automated mathematician” that has “the capability of solving a mathematical theorem all by itself.”

Mathematics as a litmus test

Mathematicians have responded to these disruptions with varying levels of concern.

Michael Harris, at Columbia University, expresses qualms in his “Silicon Reckoner” Substack. He is troubled by the potentially conflicting goals and values of research mathematics and the tech and defense industries. In a recent newsletter, he noted that one speaker at a workshop, “A.I. to Assist Mathematical Reasoning,” organized by the National Academies of Sciences, was a representative from Booz Allen Hamilton, a government contractor for intelligence agencies and the military.

Dr. Harris lamented the lack of discussion about the larger implications of A.I. on mathematical research, particularly “when contrasted with the very lively conversation going on” about the technology “pretty much everywhere except mathematics.”

Geordie Williamson, of the University of Sydney and a DeepMind collaborator, spoke at the N.A.S. gathering and encouraged mathematicians and computer scientists to be more involved in such conversations. At the workshop in Los Angeles, he opened his talk with a line adapted from “You and the Atom Bomb,” a 1945 essay by George Orwell. “Given how likely we all are to be profoundly affected within the next five years,” Dr. Williamson said, “deep learning has not roused as much discussion as might have been expected.”

Dr. Williamson considers mathematics a litmus test of what machine learning can or cannot do. Reasoning is quintessential to the mathematical process, and it is the crucial unsolved problem of machine learning.

Early during Dr. Williamson’s DeepMind collaboration, the team found a simple neural net that predicted “a quantity in mathematics that I cared deeply about,” he said in an interview, and it did so “ridiculously accurately.” Dr. Williamson tried hard to understand why — that would be the makings of a theorem — but could not. Neither could anybody at DeepMind. Like the ancient geometer Euclid, the neural net had somehow intuitively discerned a mathematical truth, but the logical “why” of it was far from obvious.

At the Los Angeles workshop, a prominent theme was how to combine the intuitive and the logical. If A.I. could do both at the same time, all bets would be off.

But, Dr. Williamson observed, there is scant motivation to understand the black box that machine learning presents. “It’s the hackiness culture in tech, where if it works most of the time, that’s great,” he said — but that scenario leaves mathematicians dissatisfied.

He added that trying to understand what goes on inside a neural net raises “fascinating mathematical questions,” and that finding answers presents an opportunity for mathematicians “to contribute meaningfully to the world.”

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort
How flexible are neural networks? If you fed one XY photos of human faces to teach it to draw made up people, could you then feed it XY photos of moles to teach it to identify skin cancer? I don't mean a different instance of that network but the very same one. And would learning moles mess up its weights so that it would lose its previous capability, or could it perform both tasks?

Smiling Demon
Jun 16, 2013
That article on AI/math is a terrible puff piece. Computing has always had effects on mathematics; there used to be huge books of integrals that disappeared when it became easier to just have an algorithm do the work. A problem in graph theory, the four colour theorem, was a well-known problem first solved via a computer-assisted proof. Proof systems are well known and not really related to the kind of AI that gets the hype today.

Doctor Malaver posted:

How flexible are neural networks? If you fed one XY photos of human faces to teach it to draw made up people, could you then feed it XY photos of moles to teach it to identify skin cancer? I don't mean a different instance of that network but the very same one. And would learning moles mess up its weights so that it would lose its previous capability, or could it perform both tasks?

It isn't magic. There is no reason to try and use the same network to do distinct things; that is just a waste. You could train a network to predict mole/skin cancer relations; prediction is basically what machine learning is about. There are real physical limits to how useful this is, though: 'actually has skin cancer' and 'looks like skin cancer' don't always overlap neatly.

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.
There is some benefit to using machines to do this sort of thing because they can notice correlations that doctors wouldn't. For better or worse, humans are bound by some kind of logic while computers can make wild connections that make no intuitive sense (but might turn out to be true anyway).

Clarste fucked around with this message at 05:29 on Jul 7, 2023

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

Smiling Demon posted:

It isn't magic. There is no reason to try and use the same network to do distinct things; that is just a waste.

I know there is no practical reason to do that; my question is theoretical. Not a programmer, but I think it would work. The nodes and weights are structured for one task, but after feeding them data for a different task for long enough, they should restructure, and maybe retain some leftover connections from the first task.

The Islamic Shock
Apr 8, 2021
I've been playing around with Google Bard lately.

Prompt: When was the last year the Republican Party won the US House, Senate and Presidency, and did not immediately cut taxes for the rich?
Answer: The last time the Republican Party won the US House, Senate and Presidency was in 2017. In December of that year, they passed a major tax cut bill that slashed taxes for corporations and wealthy individuals. The bill was signed into law by President Donald Trump.

There have been a few other times in recent history when Republicans have controlled all three branches of government, but they did not immediately cut taxes for the rich. For example, in 1994, Republicans won control of the House and Senate for the first time in 40 years. However, they did not pass a major tax cut bill until 1997.

In 2000, Republicans won the presidency and retained control of the House and Senate. However, they did not pass a major tax cut bill until 2001.

So, the answer to your question is that the Republican Party has never won the US House, Senate and Presidency and not immediately cut taxes for the rich. The closest they came was in 1994, but they waited three years before passing a major tax cut bill.

lulz. I've thrown lots of poo poo like that at Bard (ask it what kind of economic system Jesus would like) and good Lord the conservatives are going to be screaming liberal bias if they haven't started already.

Colonel Taint
Mar 14, 2004


Doctor Malaver posted:

I know there is no practical reason to do that; my question is theoretical. Not a programmer, but I think it would work. The nodes and weights are structured for one task, but after feeding them data for a different task for long enough, they should restructure, and maybe retain some leftover connections from the first task.


Assuming that skin cancer can be identified visually, there actually is a benefit to re-using an existing network and updating it with new training data. The technical term for it is transfer learning, and it works because a lot of the lower-level layers of the network are generally still relevant - for example, some layers may just be picking up edges and simple shapes, which should still be relevant for the new task. With transfer learning, there's typically much less new data required to re-train the network. So you're basically correct.
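As a rough sketch of what that looks like in practice (using a pretrained torchvision ResNet; the two-class "mole" head, optimizer, and learning rate are just placeholders, and the weights argument assumes a reasonably recent torchvision):

import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Reuse a network pretrained on generic images; its lower layers already
# pick up edges, textures, and simple shapes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the existing layers so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Swap the final classifier for a new 2-class head (e.g. benign vs. suspicious)
# that gets trained on the much smaller mole dataset.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
# ...then run a normal training loop over the mole images.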

Colonel Taint fucked around with this message at 14:07 on Jul 7, 2023

SaTaMaS
Apr 18, 2003

Clarste posted:

There is some benefit to using machines to do this sort of thing because they can notice correlations that doctors wouldn't. For better or worse, humans are bound by some kind of logic while computers can make wild connections that make no intuitive sense (but might turn out to be true anyway).

TBF humans can make wild connections that make no logical sense and we call it intuition

Evis
Feb 28, 2007
Flying Spaghetti Monster

SaTaMaS posted:

TBF humans can make wild connections that make no logical sense and we call it intuition

Humans can also dream up some pretty wild connections that don’t actually exist

Xand_Man
Mar 2, 2004

If what you say is true
Wutang might be dangerous


SaTaMaS posted:

TBF humans can make wild connections that make no logical sense and we call it pareidolia

Clarste
Apr 15, 2013

Just how many mistakes have you suffered on the way here?

An uncountable number, to be sure.

SaTaMaS posted:

TBF humans can make wild connections that make no logical sense and we call it intuition

No, the problem there is drawing logical connections that are wrong because we're dumb, not finding illogical connections that are correct.

Bar Ran Dun
Jan 22, 2006




More fun is drawing complex, correct logical conclusions and then being unable to communicate them clearly or explain them to others because of a learning disability.

Doctor Malaver
May 23, 2007

Ce qui s'est passé t'a rendu plus fort

Colonel Taint posted:

Assuming that skin cancer can be identified visually, there actually is a benefit to re-using an existing network and updating it with new training data. The technical term for it is transfer learning, and it works because a lot of the lower-level layers of the network are generally still relevant - for example, some layers may just be picking up edges and simple shapes, which should still be relevant for the new task. With transfer learning, there's typically much less new data required to re-train the network. So you're basically correct.

That reminds me of athletes who switched sports. You would assume that to get the best results as a soccer player, you want to start as early as possible and stick to it. But Zlatan Ibrahimović for instance had trained in martial arts as a kid and that gave him a specific edge. Or someone who comes to software development from an unrelated field sometimes brings to the table stuff that a CS major doesn't. Is it possible for a "transferred" neural network to have such an edge, other than needing less training data?

SaTaMaS
Apr 18, 2003

Doctor Malaver posted:

That reminds me of athletes who switched sports. You would assume that to get the best results as a soccer player, you want to start as early as possible and stick to it. But Zlatan Ibrahimović for instance had trained in martial arts as a kid and that gave him a specific edge. Or someone who comes to software development from an unrelated field sometimes brings to the table stuff that a CS major doesn't. Is it possible for a "transferred" neural network to have such an edge, other than needing less training data?

That's actually kind of a counter-example. Transfer learning is a great way to produce general lower-level layers of the network to save time, but you can get better results if you retrain and specialize those lower levels to your specific case. The edge for a transferred neural network would be that it's pretty good for a wider range of cases than the person using it has training data for.

GlyphGryph
Jun 23, 2013

Down came the glitches and burned us in ditches and we slept after eating our dead.

Doctor Malaver posted:

How flexible are neural networks? If you fed one XY photos of human faces to teach it to draw made up people, could you then feed it XY photos of moles to teach it to identify skin cancer? I don't mean a different instance of that network but the very same one. And would learning moles mess up its weights so that it would lose its previous capability, or could it perform both tasks?

I actually worked on a cancer detection neural network several years back. I imagine the newest tech is even more impressive than what we were running with, and what we had was already pretty good - we leaned towards erring on the side of flagging possible cancer so the flagged spots could be highlighted as points of interest when the images were viewed by professionals, more an assistive tool than something that claimed to do all the work on its own.

I don't think you can confidently identify moles as cancer visually in any meaningful way to begin with, but you can certainly use a network to spot possibly troublesome items and then do deeper analysis on those slides using an additional network trained for that.

Based on the work we did, I don't think training on faces beforehand would have helped? Interpolating faces doesn't seem like a useful component of detecting cancers.

GlyphGryph fucked around with this message at 01:48 on Jul 11, 2023

Evis
Feb 28, 2007
Flying Spaghetti Monster

GlyphGryph posted:

Based on the work we did, I don't think training on faces beforehand would have helped? Interpolating faces doesn't seem like a useful component of detecting cancers.

If there are areas of the face where specific cancers are more likely to occur, then maybe that training would help set a better baseline of what looks “normal”?
