|
Bar Ran Dun posted:Here’s a general example: before we had automatic controls for the burners in a boiler, there was a person (a general intelligence) who would monitor the feedback from the boiler and adjust the burners by changing burners/tips/pressures.
|
# ? Jul 1, 2023 00:39 |
|
It rather does follow from first principles, if consciousness arises from feedback loops.
|
# ? Jul 1, 2023 00:52 |
|
Bar Ran Dun posted:It rather does follow from first principles, if consciousness arises from feedback loops.
|
# ? Jul 1, 2023 00:56 |
|
SubG posted:Evolution is a feedback loop. Fish arose from evolution. Birds arose from evolution. That doesn't imply that fish can fly or birds can swim. No, the example of evolution does support that a feedback loop that isn’t even a general intelligence generated controllers for flight (several times) and for swimming (again, several times). I was only arguing that it’s true for GI and AGI; you’ve managed to pick an example that suggests potentially all feedback loops could generate controllers given enough iteration.
|
# ? Jul 1, 2023 01:03 |
|
Bar Ran Dun posted:No, the example of evolution does support that a feedback loop that isn’t even a general intelligence generated controllers for flight (several times) and for swimming (again, several times). I was only arguing that it’s true for GI and AGI; you’ve managed to pick an example that suggests potentially all feedback loops could generate controllers given enough iteration.
|
# ? Jul 1, 2023 01:07 |
|
SubG posted:Go ahead and do so. This is extremely straightforward, SubG. You perceive output signals with your senses. You can make changes to systems you interact with to affect those output signals. That’s a controller; that’s what a feedback control is. General intelligences are controllers. It’s not all they are, but it’s an essential characteristic. For an AGI, anything you could give it a feedback signal from, a means to affect, and a chosen goal/set point for, it could be a controller for.
|
# ? Jul 1, 2023 01:37 |
|
Bar Ran Dun posted:This is extremely straightforward, SubG. You perceive output signals with your senses. You can make changes to systems you interact with to affect those output signals.
|
# ? Jul 1, 2023 01:54 |
|
You are spinning a whole lotta wheels there. I’m only making a single, very basic assertion: general intelligences (our brains) can function as imperfect universal feedback controllers; therefore artificial general intelligences will be able to function as universal feedback controllers. Edit: if feedback loops are essential to consciousness.
|
# ? Jul 1, 2023 02:15 |
|
As far as I can tell, your assertion is just trivially true by definition. If you define an AGI as something that can generally solve arbitrary tasks (albeit imperfectly), then anything that can generally solve arbitrary tasks is an AGI. Now that doesn't mean just because x is an AGI, all AGIs are x. There could be AGIs that aren't x. But if x fulfills the formal requirements to be an AGI, it is one.
|
# ? Jul 1, 2023 02:18 |
|
KillHour posted:As far as I can tell, your assertion is just trivially true by definition. If you define an AGI as something that can generally solve arbitrary tasks (albeit imperfectly), then anything that can generally solve arbitrary tasks is an AGI. And in either case it doesn't demonstrate that you can construct the solution, just that it exists. Like consider the set of all things that have ever been solved by humans. It is tautologically true that humans can solve all the problems in that set. But that doesn't mean that any given human selected at random can solve all of the problems in the set. It doesn't mean that any arbitrary subset of humans can solve all the problems in the set. It doesn't mean that there's any solution better than just brute force throwing people at the problems until they're all solved. If you have a process for generating humans (I mean beyond the one humans have always used) it doesn't mean that there's some mechanism by which you could generate a human capable of solving all of the problems in the set.
|
# ? Jul 1, 2023 02:27 |
|
Being a controller isn’t solving a task; it’s adjusting toward a set point. Think of a pipe with a flow rate of ten gallons per second. A feedback controller is a device that receives a signal from the output and then causes an effect at the input to move the output toward a set point. So if we have a set point of 50 gallons per second, the controller receives the signal of 10 and then opens a valve, allowing more flow through, so the flow rate through the pipe starts rising toward the set point. You can do this with your brain: you look at the flow meter, see it’s not fifty, then open the globe valve more. Your brain can be a controller for things it can receive a feedback signal from. This is an essential ability of minds, if feedback loops are a requirement for consciousness to exist. An artificial general intelligence is going to be able to be a controller for anything we can give it a digital feedback signal from.
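The loop described here is easy to sketch in code. Below is a minimal, hypothetical proportional controller acting on a toy valve/pipe model; the gain value and the instant valve-to-flow response are invented purely for illustration, not taken from any real plant:

```python
# Toy model of the pipe example: a proportional controller nudges a valve
# until the measured flow rate approaches the set point. Gain and the
# 1:1 valve-to-flow response are hypothetical simplifications.

def control_flow(set_point, flow=10.0, gain=0.1, steps=100):
    """Repeatedly adjust a simulated valve toward the set point."""
    valve = flow  # assume flow tracks valve position directly in this toy
    for _ in range(steps):
        error = set_point - flow   # feedback signal: desired minus measured
        valve += gain * error      # controller action: open/close the valve
        flow = valve               # plant response (instantaneous, no lag)
    return flow

print(control_flow(50.0))  # starts at 10, converges toward the set point 50
```

The essential shape is exactly the forum description: read the output, compare to the set point, act on the input, repeat.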
|
# ? Jul 1, 2023 02:33 |
|
Bar Ran Dun posted:An artificial general intelligence is going to be able to be a controller for anything we can give it a digital feedback signal from.
|
# ? Jul 1, 2023 02:38 |
|
A neural network can already do that. That's literally what they do/are.
|
# ? Jul 1, 2023 02:38 |
|
KillHour posted:A neural network can already do that. That's literally what they do/are.
|
# ? Jul 1, 2023 02:41 |
|
KillHour posted:A neural network can already do that. That's literally what they do/are. Yes, for solved (fully described) systems and for models of systems. You can train a neural network to be the automation for a power plant (a fully described system). You can’t train one to control inflation (a complex system that can’t be fully described).
|
# ? Jul 1, 2023 02:43 |
|
Bar Ran Dun posted:Yes for solved (fully described) systems and for models of systems. What was your original question? Or were you just offering this as an observation?
|
# ? Jul 1, 2023 02:49 |
|
SubG posted:Neural networks have solved the halting problem? Cool. I'd love a link to the paper. You keep talking about a lot of different things so I'm really not sure you understand what they mean. A neural net is a universal feedback controller. That's what training is - it does a thing and an evaluation metric adjusts it based on the output. It's universal in the sense that it can be trained to do any task that a good evaluation metric could be created for. It won't necessarily be perfect at it and it may require an insane amount of memory, but that's not disqualifying. You brought up NP-hard. NP-hard does not mean impossible. It means slow. I can write an algorithm that solves the traveling salesman problem in a few lines of code. It can't be solved both perfectly and efficiently, but we already clarified that being perfect is not a requirement and nobody said anything about efficiency. The halting problem is similarly unrelated. If something is undecidable, a universal feedback controller isn't going to magically make it decidable. The answer to the halting problem is that it's undecidable in the same way the answer to 1/0 is undefined. "Universal" means the same thing as "general" - that it's not constrained to a small set of predetermined capabilities. It doesn't mean "omnipotent." Bar Ran Dun posted:Yes for solved (fully described) systems and for models of systems. You can absolutely train one to control inflation. You just have to give it full control over the economy and tell it what worked and what didn't, and after about a million horrific economic crashes, it will probably be pretty good. It's not practical to do that, but that's irrelevant. People also suck at controlling inflation for pretty much the same reason. We just probably suck less than a neural network hooked up to the stock market. KillHour fucked around with this message at 03:04 on Jul 1, 2023 |
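The aside that the traveling salesman problem can be solved in a few lines of code (just not efficiently) is easy to demonstrate. Here is a brute-force sketch in Python; the four-city distance matrix is made up for the example:

```python
# Brute-force TSP: try every possible tour. Exact but O(n!) - this is the
# "slow, not impossible" point made above.
from itertools import permutations

def tsp_brute_force(dist):
    """Return the length of the shortest round trip visiting every city once."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):        # fix city 0 as the start
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

# Hypothetical symmetric distance matrix for four cities.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 8],
        [10, 4, 8, 0]]
print(tsp_brute_force(dist))  # → 23, the optimal tour length
```

Perfect answers, hopeless scaling: the loop body runs (n-1)! times, which is the whole NP-hard point.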
# ? Jul 1, 2023 03:01 |
|
SubG posted:Okay, sure. If we define AGI as something like "something that can solve any already-solved problem" and "universal controller" as "a controller that can solve any solved control problem" then yes, having an AGI implies you have a universal controller. No, that was about neural networks. I think AGI will be a controller for any system, including the complex ones.
|
# ? Jul 1, 2023 03:08 |
|
KillHour posted:"Universal" means the same thing as "general" - that it's not constrained to a small set of predetermined capabilities. It doesn't mean "omnipotent."
|
# ? Jul 1, 2023 03:32 |
|
NYTs dropped another AGI article https://www.nytimes.com/2023/06/30/opinion/artificial-intelligence-danger.html?smid=nytcore-ios-share&referringSource=articleShare This one deals with neoliberalism.
|
# ? Jul 2, 2023 23:48 |
|
Bar Ran Dun posted:NYTs dropped another AGI article Paywall.
|
# ? Jul 3, 2023 03:36 |
|
Another in the Times on AI and math. Sounds like the math models are coming. https://www.nytimes.com/2023/07/02/science/ai-mathematics-machine-learning.html?smid=nytcore-ios-share&referringSource=articleShare Many of us will have used tools that automate calculations, even complex stuff. Apparently the models are working toward automating the mathematical reasoning side. Gynovore posted:Paywall. Last I checked it was discouraged to copy-paste articles or to tell folks how to bypass paywalls. If anyone has a problem with me posting the text, just let me know; I’m going to post both for Gynovore.
|
# ? Jul 3, 2023 04:08 |
|
nyts posted:
|
# ? Jul 3, 2023 04:24 |
|
Bar Ran Dun posted:Last I checked it was discouraged to copy paste the articles or to tell folks about how to bypass pay walls. Thanks. AFAIK the former is OK.
|
# ? Jul 3, 2023 04:39 |
|
“NYTs” posted:
|
# ? Jul 3, 2023 05:10 |
|
How flexible are neural networks? If you fed one XY photos of human faces to teach it to draw made up people, could you then feed it XY photos of moles to teach it to identify skin cancer? I don't mean a different instance of that network but the very same one. And would learning moles mess up its weights so that it would lose its previous capability, or could it perform both tasks?
|
# ? Jul 6, 2023 22:40 |
|
That article on AI/math is a terrible puff piece. Computing has always had effects on mathematics; there used to be huge books of integrals that disappeared when it became easier to just have an algorithm do the work. The four colour theorem, a problem in graph theory, was famously first solved via a computer-assisted proof. Proof systems are well known and not really related to the kind of AI that gets the hype today. Doctor Malaver posted:How flexible are neural networks? If you fed one XY photos of human faces to teach it to draw made up people, could you then feed it XY photos of moles to teach it to identify skin cancer? I don't mean a different instance of that network but the very same one. And would learning moles mess up its weights so that it would lose its previous capability, or could it perform both tasks? It isn't magic. There is no reason to try to use the same network to do distinct things; that is just a waste. You could train a network to predict mole/skin-cancer relations; prediction is basically what machine learning is about. There are real physical limits to how useful this is, though: 'actually has skin cancer' and 'looks like skin cancer' don't always overlap neatly.
|
# ? Jul 7, 2023 05:19 |
|
There is some benefit to using machines to do this sort of thing because they can notice correlations that doctors wouldn't. For better or worse, humans are bound by some kind of logic while computers can make wild connections that make no intuitive sense (but might turn out to be true anyway).
Clarste fucked around with this message at 05:29 on Jul 7, 2023 |
# ? Jul 7, 2023 05:26 |
|
Smiling Demon posted:It isn't magic. There is no reason to try and use the same network to do distinct things, that is just a waste. I know there is no practical reason to do that; my question is theoretical. I'm not a programmer, but I think it would work. The nodes and weights are structured for one task, but after feeding them data for a different task long enough, they should restructure, and maybe retain some leftover connections from the first task.
|
# ? Jul 7, 2023 09:26 |
|
I've been playing around with Google Bard lately.
Prompt: When was the last year the Republican Party won the US House, Senate and Presidency, and did not immediately cut taxes for the rich?
Answer: The last time the Republican Party won the US House, Senate and Presidency was in 2017. In December of that year, they passed a major tax cut bill that slashed taxes for corporations and wealthy individuals. The bill was signed into law by President Donald Trump. There have been a few other times in recent history when Republicans have controlled all three branches of government, but they did not immediately cut taxes for the rich. For example, in 1994, Republicans won control of the House and Senate for the first time in 40 years. However, they did not pass a major tax cut bill until 1997. In 2000, Republicans won the presidency and retained control of the House and Senate. However, they did not pass a major tax cut bill until 2001. So, the answer to your question is that the Republican Party has never won the US House, Senate and Presidency and not immediately cut taxes for the rich. The closest they came was in 1994, but they waited three years before passing a major tax cut bill.
lulz. I've thrown lots of poo poo like that at Bard (ask it what kind of economic system Jesus would like) and good Lord the conservatives are going to be screaming liberal bias if they haven't started already.
|
# ? Jul 7, 2023 11:23 |
|
Doctor Malaver posted:I know there is no practical reason to do that; my question is theoretical. I'm not a programmer, but I think it would work. The nodes and weights are structured for one task, but after feeding them data for a different task long enough, they should restructure, and maybe retain some leftover connections from the first task. Assuming that skin cancer can be identified visually, there actually is a benefit to re-using an existing network and updating it with new training data. The technical term for it is transfer learning, and it works because a lot of the lower-level layers of the network are generally still relevant - for example, some layers may just be picking up edges and simple shapes, which should still be relevant for the new task. With transfer learning, there's typically much less new data required to re-train the network. So you're basically correct. Colonel Taint fucked around with this message at 14:07 on Jul 7, 2023 |
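To make the transfer-learning idea concrete, here is a toy sketch in Python. The "pretrained lower layers" are stood in for by a fixed feature function, and only the final linear layer is retrained on the new task. Everything here (the features, the task, the learning rate) is invented for illustration; real transfer learning reuses learned deep layers such as convolutional edge detectors:

```python
# Toy transfer learning: freeze the "feature extractor" and retrain only
# the final layer on a new task. Purely illustrative, not a real network.

def features(x):
    # Stand-in for frozen pretrained layers (e.g. edge/shape detectors):
    # shared between tasks and never updated during retraining.
    return [x, x * x, 1.0]

def train_head(data, lr=0.01, epochs=500):
    """Fit only the final linear layer on top of the frozen features (SGD)."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            f = features(x)
            pred = sum(wi * fi for wi, fi in zip(w, f))
            err = pred - y
            w = [wi - lr * err * fi for wi, fi in zip(w, f)]
    return w

# Hypothetical "new task": y = 2x^2 + 1. The head learns it quickly because
# the frozen features already supply the x^2 term the task needs.
data = [(x / 10, 2 * (x / 10) ** 2 + 1) for x in range(-10, 11)]
w = train_head(data)
pred = sum(wi * fi for wi, fi in zip(w, features(0.5)))
print(round(pred, 2))  # should land close to 2*0.25 + 1 = 1.5
```

The point of the sketch is the asymmetry the post describes: when the frozen features already capture what the new task needs, only a small final layer has to be fit, so far less new data and training is required.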
# ? Jul 7, 2023 14:02 |
|
Clarste posted:There is some benefit to using machines to do this sort of thing because they can notice correlations that doctors wouldn't. For better or worse, humans are bound by some kind of logic while computers can make wild connections that make no intuitive sense (but might turn out to be true anyway). TBF humans can make wild connections that make no logical sense and we call it intuition
|
# ? Jul 7, 2023 14:06 |
|
SaTaMaS posted:TBF humans can make wild connections that make no logical sense and we call it intuition Humans can also dream up some pretty wild connections that don’t actually exist
|
# ? Jul 7, 2023 23:06 |
|
SaTaMaS posted:TBF humans can make wild connections that make no logical sense and we call it pareidolia
|
# ? Jul 7, 2023 23:14 |
|
SaTaMaS posted:TBF humans can make wild connections that make no logical sense and we call it intuition No, the problem there is drawing logical connections that are wrong because we're dumb, not finding illogical connections that are correct.
|
# ? Jul 9, 2023 06:53 |
|
More fun is drawing complex, correct logical conclusions and then being unable to communicate them clearly and explain them to others because of a learning disability.
|
# ? Jul 9, 2023 07:28 |
|
Colonel Taint posted:Assuming that skin cancer can be identified visually, there actually is a benefit of re-using an existing network and updating with new training data. The technical term for it is transfer learning and it works because a lot of the lower-level layers of the network are generally still relevant - for example some layers may be just picking up edges and simple shapes, which should still be relevant for the new task. With transfer learning, there's typically much less new data required to re-train the network. So you're basically correct. That reminds me of athletes who switched sports. You would assume that to get the best results as a soccer player, you want to start as early as possible and stick to it. But Zlatan Ibrahimović for instance had trained in martial arts as a kid and that gave him a specific edge. Or someone who comes to software development from an unrelated field sometimes brings to the table stuff that a CS major doesn't. Is it possible for a "transferred" neural network to have such an edge, other than needing less training data?
|
# ? Jul 10, 2023 18:59 |
|
Doctor Malaver posted:That reminds me of athletes who switched sports. You would assume that to get the best results as a soccer player, you want to start as early as possible and stick to it. But Zlatan Ibrahimović for instance had trained in martial arts as a kid and that gave him a specific edge. Or someone who comes to software development from an unrelated field sometimes brings to the table stuff that a CS major doesn't. Is it possible for a "transferred" neural network to have such an edge, other than needing less training data? That's actually kind of a counter-example. Transfer learning is a great way to produce general lower-level layers of the network to save time, but you can get better results if you retrain and specialize those lower levels to your specific case. The edge for a transferred neural network would be that it's pretty good for a wider range of cases than the person using it has training data for.
|
# ? Jul 10, 2023 19:48 |
|
Doctor Malaver posted:How flexible are neural networks? If you fed one XY photos of human faces to teach it to draw made up people, could you then feed it XY photos of moles to teach it to identify skin cancer? I don't mean a different instance of that network but the very same one. And would learning moles mess up its weights so that it would lose its previous capability, or could it perform both tasks? I actually worked on a cancer detection neural network several years back. I imagine the newest tech is even more impressive than what we were running with, and what we had was already pretty good - we leaned toward erring on the side of flagging possible cancer so it could be highlighted as points of interest when the images were viewed by professionals; more an assistive tool than something that claimed to do all the work on its own. I don't think you can confidently identify moles as cancer visually in any meaningful way to begin with, but you can certainly use it to spot possibly troublesome items and then do deeper analysis on those slides using an additional network trained for that. Based on the work we did, I don't think training on faces beforehand would have helped? Interpolating faces doesn't seem like a useful component of detecting cancers. GlyphGryph fucked around with this message at 01:48 on Jul 11, 2023 |
# ? Jul 11, 2023 01:45 |
|
GlyphGryph posted:Based on the work we did, I don't think training on faces beforehand would have helped? Interpolating faces doesn't seem like a useful component of detecting cancers. If there are areas of the face where specific cancers are more likely to occur, then maybe that training would help set a better baseline of what looks “normal”?
|
# ? Jul 11, 2023 04:38 |