  • Locked thread
xiw
Sep 25, 2011

i wake up at night
night action madness nightmares
maybe i am scum

Cpig Haiku contest 2020 winner
You can tell this is 2000-era paper from the XML obsession.

quote:

At present, Chrystalyn is simply an idea. As with Elisson and Flare, it will probably take at least a month of thought to translate the idea into a design, then another month if I need to publish it on the Web.

Peel
Dec 3, 2007

The problem with taking a 'Yud is irredeemably crazy' tack is that he isn't, he's an SF blogger with a grandiose self-image. Point him at Yud's lack of concrete results or involvement in actual AI research, the assumptions his futurology and social speculation relies on, and the old hat his philosophy is so he has a more realistic view. If he's getting into philosophy maybe point him at Hume so he has a point of reference for talking to traditional philosophers.

There's no reason someone's intellectual development couldn't go via Less Wrong so long as they don't buy into the cult and stagnate.

SolTerrasa posted:

the Singularity Institute (which folded amid embezzlement controversy, so he founded MIRI).

You can't just drop that and then not expand on it.

SolTerrasa
Sep 2, 2011

Peel posted:

The problem with taking a 'Yud is irredeemably crazy' tack is that he isn't, he's an SF blogger with a grandiose self-image. Point him at Yud's lack of concrete results or involvement in actual AI research, the assumptions his futurology and social speculation relies on, and the old hat his philosophy is so he has a more realistic view. If he's getting into philosophy maybe point him at Hume so he has a point of reference for talking to traditional philosophers.

There's no reason someone's intellectual development couldn't go via Less Wrong so long as they don't buy into the cult and stagnate.

I do actually think that Yud is irredeemably nuts (not crazy, just fifteen degrees shifted from reality), but I don't think you're wrong. My intellectual development went via lesswrong, after all. But it'll be much easier to convince Catbug's philosopher friend that Yud is a hack than that he's crazy. Try this: Yudkowsky has had grandiose ideas since he was 17, and in those two decades he has implemented zero of them. He is, at best, a popularizer of rationalist principles, though he conflates them with his own singularity-seeking views to an extent that should be alarming. The subset of his work which is well-done is not original; the subset of his work which is original is panned unanimously among recognized experts.

quote:

You can't just drop that and then not expand on it.

I first read it here:

http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/

A GiveWell staffer says don't donate to SI, citing a theft of 20% of their operating budget. Less than a year later, SI sells its only asset (the Singularity Summit brand) to the now-extant Singularity University. Less than a year after that, MIRI forms with many of the same people (minus the thief).

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

SolTerrasa posted:

I do actually think that Yud is irredeemably nuts (not crazy, just fifteen degrees shifted from reality), but I don't think you're wrong. My intellectual development went via lesswrong, after all. But it'll be much easier to convince Catbug's philosopher friend that Yud is a hack than that he's crazy. Try this: Yudkowsky has had grandiose ideas since he was 17, and in those two decades he has implemented zero of them. He is, at best, a popularizer of rationalist principles, though he conflates them with his own singularity-seeking views to an extent that should be alarming. The subset of his work which is well-done is not original; the subset of his work which is original is panned unanimously among recognized experts.

See, I know there's a lot of things I could say about Yudkowsky himself, but I'd rather try to convince this guy without having to make a personal attack. I know I've seen people posting stuff where Yud manages to explain an idea (poorly) and then completely contradict himself by the time he's finished. If this guy fancies himself a Rationalist, I think that's the kind of thing that would help convince him.

SolTerrasa
Sep 2, 2011

Pavlov posted:

See, I know there's a lot of things I could say about Yudkowsky himself, but I'd rather try to convince this guy without having to make a personal attack. I know I've seen people posting stuff where Yud manages to explain an idea (poorly) and then completely contradict himself by the time he's finished. If this guy fancies himself a Rationalist, I think that's the kind of thing that would help convince him.

That is something you won't find outside of technical matters. Yudkowsky is internally consistent at any given time. It's one of his admirable qualities. He's widely variable over time, but I actually don't consider that a failure.

One thing you will find is that his fear of death overrides his reason. He believes there is a 6% chance that a body cryonically frozen today will be viable in the future. You will not find a biologist (who isn't profiting from cryonics) who will come within an order of magnitude of that confidence (most will tell you 0.0%), but Yudkowsky regards it as such an obvious choice that it would be irrational NOT to freeze your dead body. He does not update that belief on evidence, like the time an Alcor employee wrote a book about systemic mistreatment of the corpses in their care.

But I mean, really, that's not even that bad. I'm scared of death too, and if I were a little less sane and a little more self-important, maybe I'd believe what he does. Being a rationalist isn't that bad. Being a Yudkowsky cultist definitely is.

Anticheese
Feb 13, 2008

$60,000,000 sexbot
:rodimus:

ALL-PRO SEXMAN posted:

What if the moon was made of ribs?

Then there would be approximately 2.784x10^26 calories of meat orbiting the Earth.

SolTerrasa
Sep 2, 2011

that thing I posted earlier posted:

A merely transhuman AI (as opposed to a Power) might have trouble renting a nanotechnology lab without attracting attention.  So, if the Singularity Institute has the money, we should have a nanotechnology lab in our basement.  The remarkable thing about nanotechnology, circa 2000, is how cheap the basic equipment is.  Having a nano lab is likely to be considerably easier than having our own supercomputer.  Circa 2000, a pocket nanotech lab would probably consist of a scanning tunnelling microscope, a DNA sequencer, and a protein synthesis machine.

...

Given all those devices, I would expect diamondoid drextech (SolTerrasa note: those are meaningless sounds) - full-scale molecular nanotechnology - to take a couple of days; a couple of hours minimum, a couple of weeks maximum.

I love futurists.

A Wizard of Goatse
Dec 14, 2014

Pavlov posted:

See, I know there's a lot of things I could say about Yudkowsky himself, but I'd rather try to convince this guy without having to make a personal attack. I know I've seen people posting stuff where Yud manages to explain an idea (poorly) and then completely contradict himself by the time he's finished. If this guy fancies himself a Rationalist, I think that's the kind of thing that would help convince him.

Guy isn't a hypocrite or a huckster or an idiot, he's just a third-rate theologian whose proof of the existence of computergod amounts to that we had abacuses in 1900 and now we have iPhones; therefore in another hundred years or less everything we know about physics or observed reality will be so much tribal superstition and we'll have pulled off a Civilization tech win where all your dreams will, naturally, come true

It's not inconsistent, it's just unsupported and insupportable, because it relies on a sneering equivalence between any observations available to modern man and the witch doctor blaming evil spirits for the failed harvest, plus an interpretation of Bayes' theorem very close to "if you can phrase an argument in the form of a sufficiently large made-up number, then it must be true."

sat on my keys!
Oct 2, 2014

SolTerrasa posted:

I love futurists.

Where is the $5 million e-beam writer? Where are all the things to do litho? Is all of the God-AI's nanotech going to be wetware?

SolTerrasa
Sep 2, 2011

bartlebyshop posted:

Where is the $5 million e-beam writer? Where are all the things to do litho? Is all of the God-AI's nanotech going to be wetware?

Can you imagine if he'd heard of 3d printers in 2000?

More seriously though I love all these :words: he wrote to obscure how terrible he is at working. Seriously, if you haven't yet, read his technical timeline. This is a man who has seriously debated whether he needs a devoted memeticist for his new programming language, or whether simply creating it will be enough to ensure it gets more popular than python and Java (no poo poo). He has deeply considered whether v1 of the language will be spectacularly brilliant enough to mean that a port of the Linux Kernel is inevitable, or if it will have to wait until v2.

And then he never actually got around to MAKING it.

Here is the project page, fyi: http://flarelang.sourceforge.net

sat on my keys!
Oct 2, 2014

SolTerrasa posted:

Can you imagine if he'd heard of 3d printers in 2000?

More seriously though I love all these :words: he wrote to obscure how terrible he is at working. Seriously, if you haven't yet, read his technical timeline. This is a man who has seriously debated whether he needs a devoted memeticist for his new programming language, or whether simply creating it will be enough to ensure it gets more popular than python and Java (no poo poo). He has deeply considered whether v1 of the language will be spectacularly brilliant enough to mean that a port of the Linux Kernel is inevitable, or if it will have to wait until v2.

And then he never actually got around to MAKING it.

Here is the project page, fyi: http://flarelang.sourceforge.net

First, Sourceforge is instant lol.

Second, it's probably just like the AI chatbot thing. Once he got past the first three or four of his acolytes who loved it and hit someone with domain expertise who told him it was stupid, he gave up immediately.

Curvature of Earth
Sep 9, 2011

Projected cost of
invading Canada:
$900

SolTerrasa posted:

diamondoid drextech (SolTerrasa note: those are meaningless sounds)

Correction: they're only meaningless if you're an actual scientist. But if your sole reference point for science is fiction, they mean, "I've read Neal Stephenson's The Diamond Age" and "I've read Eric Drexler's breathless predictions about nanotech".

While we're on the subject, Drexler and all the other typical nanotech evangelists are wrong about practically everything.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
Re: Yudkowsky failing as a rationalist - I still think that one of the most immediately, obviously atrocious examples of this is Torture vs. Dust Specks. I remember reading it as the exact point where I thought "yeah, better take everything else I read around here with a grain of salt". To top it off, Yudkowsky does at least profess to have a proper concept of "huh, at this point my conclusions are so absurd, it's easier for me to believe I just failed to account for something" -- he just fails to act on it.

I can't help but notice that this is a recurring trend. Yudkowsky will outline a sociological / epistemological / what-have-you issue and provide a nice catchy explanation for it, but will gladly and blindly walk into the very same traps that he outlined when his own intuitions lead him to them. When not indulging in them, Less Wrong is actually a neat source for Things Not To Do (My favorite off the top of my head is When None Dare Urge Restraint).

Triple Elation fucked around with this message at 17:17 on Jan 28, 2015

90s Cringe Rock
Nov 29, 2006
:gay:

SolTerrasa posted:

I really cannot express how fantastic this link is without posting the whole drat thing, but here are some choice quotes.

quote:

"The Plan to Singularity" is a concrete visualization of the technologies, efforts, resources, and actions required to reach the Singularity. Its purpose is to assist in the navigation of the possible futures, to solidify our mental speculations into positive goals, to explain how the Singularity can be reached, and to propose the creation of an institution for doing so.

...

May you have an enjoyable, intriguing, and Singularity-promoting read.

--Eliezer S. Yudkowsky, Navigator.
I'm Eliezer S. Yudkowsky: author, dreamweaver, visionary, plus programmer. You are about to enter the world of my rationality; you are now entering my Singularity.

I know it's low-hanging fruit, but drat.

Edit: "But that pretense of Vulcan logic, where you think you're just going to compute everything correctly once you've got one or two abstract insights—that doesn't work in real life either." -- Eliezer Yudkowsky

90s Cringe Rock fucked around with this message at 11:45 on Jan 28, 2015

The Time Dissolver
Nov 7, 2012

Are you a good person?

chrisoya posted:

I'm Eliezer S. Yudkowsky: author, dreamweaver, visionary, plus programmer. You are about to enter the world of my rationality; you are now entering my Singularity.

This entire thread I've been trying to pin down who Yud reminds me of and this is exactly it, THANK YOU.

Dr Cheeto
Mar 2, 2013
Wretched Harp

The Time Dissolver posted:

This entire thread I've been trying to pin down who Yud reminds me of and this is exactly it, THANK YOU.

Help a poor dumb goon out, I don't understand the reference

Wolfsbane
Jul 29, 2009

What time is it, Eccles?

http://www.videobash.com/video_show/garth-marenghi-s-darkplace-intro-1047472

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Right, the loving dust specks. That might be a good thing to throw at my guy. I might be missing something with When None Dare Urge Restraint, though. Looks to me like he's just saying "People were scared after 9/11, and this led to Iraq, which was dumb." Except he does it with an extra helping of "But I totally called it though :smug:."

Dr Cheeto
Mar 2, 2013
Wretched Harp
Does he ever even update his priors? Like, for all the sloppy blowjobs he gives Bayes' theorem, he seems to be pretty bad at taking advantage of its greatest strength.

Has he actually stated a disdain for experiments and such (you know, the meat and potatoes of empirical science), or is it just implied by his reliance on pulling numbers out of his rear end and his allergy to anything approximating work?

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1

Pavlov posted:

I might be missing something with When None Dare Urge Restraint, though. Looks to me like he's just saying "People were scared after 9/11, and this led to Iraq, which was dumb." Except he does it with an extra helping of "But I totally called it though :smug:."

I don't care much about the specific example he gives, I'm talking about the general notion:

Yudkowsky posted:

[..] just as the vast majority of all complex statements are untrue, the vast majority of negative things you can say about anyone, even the worst person in the world, are untrue. [..] It is just too dangerous for there to be any target in the world, whether it be the Jews or Adolf Hitler, about whom saying negative things trumps saying accurate things. [..] Once restraint becomes unspeakable, no matter where the discourse starts out, the level of fury and folly can only rise with time.

cf.

Orwell posted:

Almost nobody seems to feel that an opponent deserves a fair hearing or that the objective truth matters as long as you can score a neat debating point. [..] The atmosphere of hatred in which controversy is conducted blinds people to considerations of this kind. To admit that an opponent might be both honest and intelligent is felt to be intolerable. It is more immediately satisfying to shout that he is a fool or a scoundrel, or both, than to find out what he is really like.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
Oh yeah, the Flarelang stuff.

I already did a long ranty review earlier in the topic, but basically it's a hyperverbose Python that confuses a lot of pretty fundamental distinctions (e.g. between variables and their values) and operates on XML objects instead of hashtables. It doesn't really include any features that can't in some way be extrapolated from Python, XML, or the first ten minutes of a trendy language he doesn't know that well.

He's kind of verbose and confusing but from reading his feature list I think most people would take away:
- Yudkowsky knows a little bit about XML.
- But Yudkowsky doesn't know Python very well.
- And Yudkowsky definitely doesn't know any languages other than Python or XML very well. (He talks a lot about C++, but I'm not convinced he knows it.)

There are a few neat features -- a text representation of the code is generated from a syntax tree, instead of the other way around. There are also some really stupid ones, like "voice comments": instead of typing comments, attaching sound-file explanations of what parts of the code mean. Overall it's not wildly horrible for an early go at a language design, but it's not good.

A representative helping of his feature document:

reinventing the lambda posted:

Quoting an expression, as in `a + b`, results in a miniature Flare codelet - that is, a fragment of FlareCode that can be passed around. Conversion to a Value results in the expression being evaluated; the expression is reevaluated each time a Value is needed. When treated as an Operand, the expression itself is passed around. The transparency idiom is similar to that for references.
He actually talks about first-class functions later -- but apparently he thinks this is somehow different? Also note that he's planning to use the typesystem to enforce correctness in his glorious Pythonesque hodgepodge. That's probably one of the things Python is worst at doing.
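Krotera's point -- that a "quoted expression" which is re-evaluated each time a value is needed is just a first-class function -- is easy to see in the Python Flare cribs from. A throwaway sketch (variable names invented):

```python
# Flare's "miniature codelet" `a + b` is a zero-argument closure:
# re-evaluating it whenever a Value is needed is what calling a
# lambda already does.

a, b = 2, 3
codelet = lambda: a + b   # the "quoted expression", passed around unevaluated

print(codelet())  # evaluated against current bindings -> 5
a = 10
print(codelet())  # re-evaluated, sees the new binding -> 13
```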

reinventing operator sections posted:

Level Four

Have `+` or `+ 3` evaluate to a fragment of FlareCode, such as <add><left><unbound/></left><right><num>3</num></right></add>.

Level Six

Another way to do this might be to have standard placeholder keywords; i.e. `$1 + 3` evaluates to <add><left><lambda_1/></left><right><num>3</num></right></add>.

Level Eight

`2` + `+ 3` yields `2 + 3`.

All three features are intended for use in code that will actually build up complex pieces of FlareCode through direct manipulation. There doesn't need to be any quick way to bind a lambda'd expression to a variable - this should be done using subfunctions or through self-modifying code, not as a way to make prosaic code less readable.
Hey look, a type-ignoring version of operator sections that (instead of just acting like partial applications) somehow requires introspection over the new argument (see the Level 8 example: to merge redundancy?) because that's easier than having a type system.
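For comparison, the "Level Four" and "Level Six" machinery above is just partial application, which existed long before 2000. A rough sketch of both levels (names invented):

```python
from functools import partial
import operator

# "Level Four": `+ 3` as a reusable fragment -- no FlareCode XML needed,
# just partial application of the addition operator.
add3 = partial(operator.add, 3)
print(add3(2))    # -> 5

# "Level Six": the `$1 + 3` placeholder idiom is an explicit lambda
# over the unbound slot.
section = lambda rhs: (lambda lhs: lhs + rhs)
plus3 = section(3)
print(plus3(2))   # -> 5
```

No introspection over argument trees required, which is the point: partial applications compose without the "Level Eight" merge step.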

reinventing the callback posted:

As described in Causality (yes, you need to read the paper), there are essentially two kinds of causality; action-based and validity-based. Causality occurs when element/object B makes a computation that uses data from element/object A, and when the application logic requires the maintenance of a correspondence (mapping) between element A and element B. Validity-based causality is when a change to A causes B to be marked as having invalid state, in which case B needs to recompute before it can be reused. Action-based causality involves maintaining the correspondence in real time; when A is changed, B is notified in time for the corresponding change to be carried out on B.
This seems to be callbacks. Notice how he doesn't handle the edge case that made straight callbacks unpopular -- error handling. What happens if you ask B for some data, and A throws up all over the rug?
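For the curious, here's a minimal sketch (all names hypothetical) of "action-based causality" as plain callbacks, including the error path the spec never mentions:

```python
# A changes; registered callbacks keep B in sync. The interesting part
# is what happens when the recomputation raises -- the case Flare's
# "Causality" paper doesn't address.

class Cell:
    def __init__(self, value):
        self._value = value
        self._observers = []   # (callback, error handler) pairs

    def observe(self, callback, on_error=None):
        self._observers.append((callback, on_error))

    def set(self, value):
        self._value = value
        for callback, on_error in self._observers:
            try:
                callback(value)          # maintain the A -> B mapping
            except Exception as exc:     # A threw up all over the rug
                if on_error:
                    on_error(exc)        # B decides how to recover
                else:
                    raise

a = Cell(1)
log = []
a.observe(lambda v: log.append(10 / v),
          on_error=lambda exc: log.append("invalid: %s" % exc))
a.set(5)   # appends 2.0
a.set(0)   # ZeroDivisionError is routed to on_error instead of crashing
print(log)
```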

reinventing the goto posted:

Functions, inside a method, that can be called without adding a new method call to the stack - possibly even called within the current block. A section of FlareCode, contained inside a semistatic local variable, which is interpreted directly when called, rather than a new method call being added to the stack. ("Semistatic local variable": Having a default value, but locally overridable by assignment - parenting rules.)

Recommendations: A subfunction which has no scope at all - which is executed directly inside the current block - should not accept any arguments, since these arguments would need to be newly defined variables.
A goto... that comes with a goback! (For some reason he thinks that calls, of all things listed, are going to bottleneck the interpreter.)

Also "semistatic local" is stupid because locals are already capable of having default values -- they're called "whatever you set them to first." I don't understand what being static has to do with this.

reinventing closure posted:

The ability to pass a subfunction or a quoted expression outside its originating method call, and have any invocation of that subfunction or quoted expression refer to local variables contained in the originating method call.

Closure should occur by default so that interesting expressions can be passed as arguments.

Recommendations:
Have a way to mark a subfunction as having scope "closure block", "closure", or "block", or "none". Have a way to mark a quoted expression as having scope "closure" or "none". Both subfunctions and quoted expressions should have closure by default. In FlareSpeak, a quoted expression with no closure should be defined inside an "expression" block, rather than through the `a + b` idiom.
A subfunction or a quoted expression with closure should contain a hard reference to the current method call when referenced or created. Thus, passing a subfunction or expression with closure outside the method call will result in an error if the reference is still around when the method pops off the stack.
Pointers that die when the stack frame they're pointing at goes away? How C++. A modern flashy Python that lets references just die on you if it wants to?
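Python's own closures already do what the "Recommendations" ask for, minus the dangling-reference errors: the originating scope simply outlives the call. A quick illustration:

```python
# The inner function keeps its defining frame's variables alive after
# the outer call returns -- no hard reference to a stack frame, and
# nothing errors "when the method pops off the stack".

def make_counter():
    count = 0
    def bump():
        nonlocal count
        count += 1
        return count
    return bump   # the closure escapes its originating call safely

c = make_counter()
print(c(), c(), c())   # -> 1 2 3
```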

reinventing toposort posted:

(in a section on how he determines what module-level 'static' code runs first, other than import order, which is what Python uses)
Let priority be a floating-point number. There are other uses for priority, and usually priority "zero" will be standard/default. Higher priorities execute sooner. For initialization priorities, "0" is application main logic. Priority "10" is the standard for application setup. Class initialization has priority "20" by default.
Or, here's an idea! You let each module enumerate its dependencies... say, with a keyword like "import" and have it execute the static code of each dependency at import-time. That avoids all the accident-prone numeric fuckery you're subjecting yourself to.

By the way, he seems to think not all code on the module-level in Python is designed to be executed. (as opposed to stuff related to structure, like Java class declarations) He's wrong.
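For the record, Krotera's alternative is exactly what a topological sort over declared dependencies gives you. A sketch with Python's stdlib sorter (the module names here are made up, standing in for Yudkowsky's priority tiers):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Instead of floating-point priorities (0 = main logic, 10 = setup,
# 20 = class init), each "module" declares what it depends on, and
# initialization order falls out of the graph.
deps = {
    "app_main":   {"app_setup"},    # main logic runs last
    "app_setup":  {"class_init"},   # setup needs classes initialized
    "class_init": set(),            # no dependencies
}
order = list(TopologicalSorter(deps).static_order())
print(order)   # dependencies first: class_init, app_setup, app_main
```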

reinventing plugins posted:

Part of the general principle of annotation is distant processes, even processes that are innocent of each other, working together to assemble a single object. In terms of modules, this means that multiple files should be able to belong to the same module - for example, multiple FlareCode files may contain data for a single class - or a single global object. The idea would be that, on reading in a FlareCode file, the FlareCode file may specify subelements to be added to a target element - where that target may be a class, a module, et cetera. If that target element does not yet exist, it is created - this prevents self-assembly from being dependent on order of execution. (Alternatively, initialization priorities can be used.)
This would have been a non-problem had you picked an object model that didn't stink like rear end to reason about. (compare Ruby)
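The Ruby comparison is apt: "open classes" let separate files bolt members onto one class, and even Python manages a crude version with no self-assembly protocol at all. A toy illustration (names invented):

```python
# "Multiple files may contain data for a single class" is just
# attaching members to an existing class after the fact -- no target
# elements, no initialization priorities.

class Head:          # defined in one "file"
    pass

def blink(self):     # defined in another "file"
    return "blink"

Head.blink = blink   # assembled onto the existing class
print(Head().blink())   # -> blink
```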

reinventing English posted:

Extraction of FlareSpeak from FlareCode, or FlareCode from FlareSpeak, is metaphorically similar to layered feature extraction in a sensory modality.
What?

reinventing garbage collection posted:

Two-way references allow for safe dynamic deletion and safe stack allocation.

Recommendations:
Use a two-way linked list to track the references. The ->next and ->prev members should probably be FlareReference->next and FlareReference->prev, of type FlareReference*, meaning that the linked-list links are located on the References themselves. I'm not sure how to handle the endpoints, but one example would be to have a reference to the first FlareReference in the chain, and then treat that as a special case, detectable because FlareReference->prev == 0.
Given the existence of soft references, if an item is supposed to be garbage-collected, then track the number of hard references in a counter, rather than iterating through the list to count the hard references each time. This may someday be obviated by a more sophisticated mark-and-sweep algorithm. If a hard reference exists to a subelement of an element with GC scope, then that may prevent the superelement from being deleted, and vice versa; thus, hard references to subelements with "subelement of GC" scope should increment the reference counter of the enclosing element or super-superelement with GC scope.
[it goes on for paragraphs]
Requirement: all garbage must be cyclic garbage.

(Cyclic garbage is the edge case that breaks any reasonably simple GC. In Flare it seems to be a required default. I have no idea why, or what the requirement pertains to, because the language in the section it occurs in is so unclear.)
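For contrast, here's what CPython actually does about the cycles that pure reference counting (which is what Flare's two-way reference lists amount to) can never free:

```python
import gc

# Two objects that point at each other never hit refcount zero on
# their own; CPython pairs refcounts with a cycle detector for
# exactly this reason.

class Node:
    pass

a, b = Node(), Node()
a.partner, b.partner = b, a   # a reference cycle
del a, b                      # unreachable, but refcounts stay nonzero

unreachable = gc.collect()    # the cycle collector reclaims them
print(unreachable >= 2)       # both Nodes (plus their dicts) found
```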

reinventing operator overloading posted:

In the expression `a + b`, the actual addition is delegated to one of the operands. Which operand the task is delegated to is determined by the priority of the class/type/metadata for that operand. Thus, "priority" is a subelement of the Metadata class, which is translated into a floating-point or fixed-point number in the C++ representation of the uberdata. The higher-priority operand is asked to handle the operator first; if that fails, the interpreter attempts to delegate the operation to the other operand; if that fails, the expression fails. If the two priorities are equal, the left-hand operand is asked first. Except in cases of operator overloading, the actual operation will presumably be carried out by a C++ function which implements that operator. The C++ uberdata for any type/class/metadata will contain a pointer to a structure containing the function pointers for the language-standard implementations of that operation on that type, or to the C++ functions which implement the delegation of operator overloading to Flare methods. The right-hand addition operator may differ from that of the left-hand addition operator (addition is not always commutative - it is not commutative for strings, for example). In other words, `2 + foo` may call a different function on `foo` than `foo + 2`.
What if I want a right-associative operator? You can't define a right-associative operator in this syntax.

This doesn't solve the problem it was probably meant to solve. In Python, if you have hypothetical types B (user-written) and A (built-in) then A isn't aware of B and can't necessarily adapt its methods to work with B unless B provides an interface A knows how to work with. Under Yudkowsky's rules B can say "I don't implement your interface, but I'm *amazing* and all my versions of your operations (if they're provided by operators) take precedence over your versions." But if I introduce user type C that's also meant to interoperate with A, and try to make it interoperate with B, there's still no connection. Instead of requiring language users to formalize *how* their type is suitable for an operation (which is what requiring an interface does), he's just letting them claim "don't worry, I totally have a great version of this op" based on guaranteed-incomplete knowledge of what other types, user and builtin, implement that operation.
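Python's real solution to the delegation problem, for reference: ask the left operand first, and let it return NotImplemented to hand the job to the right operand's reflected method -- no floating-point priorities. A toy sketch (the Money class is invented):

```python
# `2 + Money(3)` works because int.__add__ returns NotImplemented for
# an unknown type, and Python then tries Money.__radd__. Delegation is
# a protocol, not a priority number.

class Money:
    def __init__(self, cents):
        self.cents = cents
    def __add__(self, other):
        if isinstance(other, Money):
            return Money(self.cents + other.cents)
        if isinstance(other, int):
            return Money(self.cents + other)
        return NotImplemented   # delegate instead of failing outright
    __radd__ = __add__          # reflected form handles `2 + Money(3)`

print((Money(2) + 3).cents)     # -> 5
print((3 + Money(2)).cents)     # int punts, Money.__radd__ runs -> 5
```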

reinventing lists posted:

2.13: List subelement type

Level Zero

An element should be able to have multiple subelements of a particular type, such as <eye>; a <head> element with three eyes might have three distinct <eye> elements. An element may also declare that all elements with list type are bound together into a single ordered list - in effect, everything except planes, or any other cases statically declared, would be bound together in a single ordered list.
Help, I'm Eliezer Yudkowsky and I've just invented a distinction between objects and objects-in-lists that only my brilliant language resolves or cares about!

reinventing properties posted:

An interception - in canonical form, an element in %intercept aka <p-intercept> - is a subelement (or, in this case, sub-subelement) that changes the behavior of the enclosing element, the location or superlocation. Rechecking for an interception every time any element is accessed would tend to bring the interpreter to its knees, and for that matter, cause an infinite recursion error if done the obvious way (i.e., if the FLHasSubElement("redirect_lookup") call also automatically tries checking for the <redirect_lookup> interception...).
I should not be able to say "x = 4," perform some computations, and then accidentally assign x to a value which turns my scope into a Ferrari. Help.
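What the "interception" machinery is groping toward already exists as descriptors: only attributes that declare a property pay the lookup cost, and the interpreter isn't rechecking every element on every access. A sketch (class invented for illustration):

```python
# Per-attribute interception: the property's getter/setter only run
# for `x`; every other attribute is a plain dict lookup. And the
# setter can refuse to turn your scope into a Ferrari.

class Scope:
    def __init__(self):
        self._x = 0

    @property
    def x(self):
        return self._x

    @x.setter
    def x(self, value):
        if not isinstance(value, int):
            raise TypeError("x must be an int")
        self._x = value

s = Scope()
s.x = 4
print(s.x)   # -> 4
```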

reinventing doing the research posted:

Functional programming languages distinguish between "functions" and "procedures"; under this usage (not standard in this document, BTW) a "function" returns an output based on its inputs and has no other effects; a "procedure", used by evil procedural programming languages, makes changes to global state. While I have never tried to write a program in a truly functional programming language (Haskell, for example), I can see that controlling side effects would be a powerful optional way of enforcing cleaner programs.

While I have never tried ramming a banana up my rear end I can see that ramming a banana up my rear end would be a powerful optional way of jacking off.

reinventing no they don't you troglodyte please learn Haskell before complaining about it posted:

There are some common design patterns that involve temporary breaks in an otherwise controlled set of side effects; for example, caching, in which a pure function has a strictly internal global variable that matches previously computed results to previously encountered arguments. (Output is still a strict function of input; only the amount of time necessary to compute the result is changed as a result of cache state.) Profilers need to be able to change certain pieces of local state, for example, a counter showing the number of times a function has been called, even if the function is declared const. Stack-based modifications temporarily change the global state, but then restore it.

I know nothing about lazy evaluation or the ST monad (which was specified as early as 1993, for the love of God!). Please stab me with a greasy spork.
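For reference, the "caching" pattern he treats as some exotic breach of purity is plain memoization, which Python ships in the standard library as `functools.lru_cache`. A minimal hand-rolled sketch (invented names, nothing Flare-specific):

```python
# A pure function with a strictly internal cache: output is still a function
# of input only, the cache changes nothing but the time taken to compute it.
def memoize(fn):
    cache = {}           # internal state, invisible to callers
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, fast despite the naive recursion
```

That's the whole design pattern he spends a paragraph on.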

reinventing wow, it's almost as if your goal is to track an additional layer of metadata about the values in your program to determine if they're suitable - allowed, rather -- to perform certain operations one might otherwise unquestioningly assume they support posted:

Side effect control means that a const instance method must not change the object, and that a function called with a const argument must not change the argument. Using static typing to control constness, however, means that a function called with a const argument must not only refuse to change the argument, but if returning a reference derived from that argument, must return a result that the calling function needs to treat as const, whether or not the calling function has promised not to modify that argument. In C++, this is necessary; supposing that the calling function had promised to treat an object as const, passing it to a const function and getting back a non-const reference would create the potential for violation of side effect control. But it violates innocence; whether the side-effect-controlled function treats the argument as const can have an effect on whether the calling function must treat as const references derived from that argument.

The ideal solution consists of derivation tracking. To prevent violation of innocence, the calling function needs to know which argument the return value is derived from, thus enabling the calling function to track constness locally - maintaining, in fact, the same idiom that would be used if no other function calls were involved. When `foo.bar` is written, without operator overloading being involved, the function in which that statement appears knows that if `foo` is const, it implies that `foo.bar` is const. Whether `foo.bar`, as an expression, has any effect on `foo` (it shouldn't) is an entirely separate issue from knowing that `foo.bar` is derived from `foo` and will have the same constness or non-constness as `foo`.

The first apparent idiom for dynamic tracking of side effect control, as opposed to the statically typed tracking seen in C++, is for constness to be dynamically represented as a %const.something annotation which is contagious across assignments, lookups, dereferences, and expression invocations. That is, if `foo` is const (has an annotation <p-const><something/></p-const>), then in `bar = &foo`, `bar = foo.&baz`, `bar = *foo`, and `bar = foo()` must all cause variable `bar` to inherit constness. Constness is not just a binary value, in this formulation; the plane %const actually tracks derivation through the use of specific tags. foo(const var a1, const var a2) will distinguish between constness derived from a1 and constness derived from a2.

Get a type system Yud
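For the record, a rough dynamic approximation of this "contagious constness" fits in ordinary Python; `Const` and `Point` are invented names here, and this is a sketch of the idea, not anything from Flare:

```python
# Constness as a runtime wrapper: reads pass through, writes are refused,
# and anything derived from a const object comes back const.
class Const:
    def __init__(self, obj):
        object.__setattr__(self, "_obj", obj)

    def __getattr__(self, name):
        value = getattr(object.__getattribute__(self, "_obj"), name)
        # constness is inherited by anything derived from a const object
        return Const(value) if hasattr(value, "__dict__") else value

    def __setattr__(self, name, value):
        raise TypeError("cannot mutate through a const reference")

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Const(Point(1, 2))
print(p.x)     # reads pass through: 1
try:
    p.x = 5    # writes are refused
except TypeError as err:
    print(err)
```

About ten lines, no new language required.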

Germstore
Oct 17, 2012

A Serious Candidate For a Serious Time
Christ, I couldn't get through half of that. It feels like what Clojure would be if Rich Hickey was an idiot.

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Germstore posted:

Christ, I couldn't get through half of that. It feels like what Clojure would be if Rich Hickey was an idiot.

That's not a bad comparison. Clojure has a lot of neat usability features you can implement in terms of the core language, and Flare has a lot of allegedly novel features that could be implemented in about ten lines of Python. Clojure's a small language that feels big and Flare is a smallminded language that feels bigheaded.

By the way, if I missed explaining the Flare object model in the above, short version (by memory) is that everything's an object, variables are like objects, some objects are Values (which are like values in any other language) but others are Expressions (which are often Operands), which are a little like lambdas and a little like lazy values depending on what mood the type system is in, although there's no static typing, and to be honest they're actually syntax trees -- lists and other data structures are made of things that are similar to objects, or maybe they're just XML -- but objects are XML, and so is code -- and a name's metadata, which has fields, is separate from the name's associated value, which has fields, and the value of a thing (and sometimes its metadata) varies based on scoping rules specified both in the object, variable, or data structure itself (in which case it's called an interceptor) and on builtin rules assigned Java/C++like names like "const" and "semistatic" -- and those rules often even (i.e. in const) represent reversible transformations of the data that seem to last as long as the object is in a stack frame below the one where those modifiers apply, (although per another page Flare does not run on a stack but a tree, which is like a stack) unless code where the object was bound to another name without those modifiers is allowed to run (via closures or otherwise), this is not a type system for it occurs at runtime, dynamically, wave of the future, stop making fun of me, it's too confusing for you, give us your money.

Krotera fucked around with this message at 19:21 on Jan 28, 2015

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
I am amazed that someone would seriously succumb to "Not Invented Here" paranoia due to freakin' Python. Oh no! I wanted to try my hand at some hardcore AI stuff and all the functionality I needed wasn't there at all. I had to write "import sklearn". That's a whole 14 characters! How am I supposed to implement an omnipotent benevolent AI using this thing?

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET

Triple Elation posted:

I am amazed that someone would seriously succumb to "Not Invented Here" paranoia due to freakin' Python. Oh no! I wanted to try my hand at some hardcore AI stuff and all the functionality I needed wasn't there at all. I had to write "import sklearn". That's a whole 14 characters! How am I supposed to implement an omnipotent benevolent AI using this thing?

IIRC his rationale went like this:
- "annotative" programming (the object model I described in the above post) is too important not to have
- Python is bad at expressing self-modifying code

Speaking of the second point.

code:
def yoneda(action):
  x = [1, 2, 3]
  return list(map(action, x))  # list() so the REPL output below matches on Python 3 too

>>> yoneda(lambda x: x + 1)
[2, 3, 4]

>>> yoneda(lambda x: x * 2)
[2, 4, 6]
Whoops! I think I dropped something.
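On that second point, a hedged sketch of why "Python is bad at self-modifying code" doesn't hold up: functions are first-class objects, so both the metadata "plane" and the code behind a name can be changed at runtime (names invented for illustration):

```python
# Python functions are ordinary objects, so hanging arbitrary metadata off a
# name at runtime is a one-liner, and so is genuine self-modification.
def greet():
    return "hello"

greet.plane = {"const": True}                    # metadata attached at runtime
greet.__code__ = (lambda: "goodbye").__code__    # swap the code behind the name

print(greet())       # goodbye
print(greet.plane)   # the annotation survives the code swap
```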

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Pavlov posted:

Right, the loving dust specks. That might be a good thing to throw at my guy.
Have you considered just reminding the guy of the sheer volume of Yudkowsky's certifiable bullshit? The cryonics, the belief in omniscient/omnipotent AI, the eternally unfinished Harry Potter magnum fanficus (despite the loving personal cabin reserved for its completion), the worthless "research institute" and its attendant "charity", "Timeless Decision Theory" and the obvious credence that Yud gives to Roko's Basilisk, the abuse of Knuth's notation, the terminal misapplication of Bayes' theorem, the... hell, there's too much garbage to list!

Applewhite
Aug 16, 2014

by vyelkin
Nap Ghost
I'm beginning to get the impression that this Yud character is something of a buffoon.

A Wizard of Goatse
Dec 14, 2014

Dr Cheeto posted:

Does he ever even update his priors? Like, for all the sloppy blowjobs he gives Bayes he seems to be pretty bad at taking advantage of its greatest strength.

Has he actually stated a disdain for experiments and such (you know, the meat and potatoes of empirical science), or is it just implied by his reliance on pulling numbers out of his rear end and his allergy to anything approximating work?

He's not like out-and-out hostile to the scientific method, it's more that whenever the science doesn't support his fantasies he takes the lofty position of the man from the year 40,000 softly chuckling to himself about how those savages used to believe the world worked. See also: computer simulation that effectively creates another, larger universe in order to get extra processing power; cryonics; how the human brain works; how AIs might work; Moore's Law as a more absolute law of physics than the mere properties of electrons.

He doesn't feel the need to actually support his extremely specific and wrong claims because history will inevitably vindicate him and prove everyone else wrong without further effort on his part; being wrong and an idiot to the 21st century is no biggie because if folks from the 21st century are so smart why aren't they god robots

A Wizard of Goatse fucked around with this message at 21:13 on Jan 28, 2015

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

A Wizard of Goatse posted:

He's not like out-and-out hostile to the scientific method, it's more that whenever the science doesn't support his fantasies he takes the lofty position of the man from the year 40,000 softly chuckling to himself about how those savages used to believe the world worked. See also: computer simulation that effectively creates another, larger universe in order to get extra processing power; cryonics; how the human brain works; how AIs might work; Moore's Law as a more absolute law of physics than the mere properties of electrons.
You know what, that might actually be my favorite thing about Yudkowsky. It's like an idiot who just knows that he can make a perpetual motion machine, that other people accept the idea of :airquote: conservation of energy :airquote: only because they're doing it wrong, unlike him.

SolTerrasa
Sep 2, 2011

That was amazing, thank you.

One more problem is that Yud's wonderful innovative system of Planes (which he has to contrive awful syntax for, ugh) is literally just protobuf's MessageSet, which Google open-sourced a while ago. And this is (one of, oh god so many reasons) why you don't base your language on a serialization method: someone comes up with a better one.

Another, for those of you who are technical and following along with this Flare thing: can you *imagine* versioning, any versioning at all, language or class or individual object versioning, in a system based exclusively on XML serialization? Protobuf has a well-specified set of allowable forward changes to keep stored old protobufs wire-compatible with new ones. XML... Does not. And *everything* is XML. The Flare code is XML, the data is XML, the interpreter state is XML, the "domules" are XML, and it's all cripplingly unversioned. Here is Yud, in the hilarious future where he had a work ethic and actually wrote this stupid thing.

http://stackoverflow.com/questions/2014237/what-are-the-best-practices-for-versioning-xml-schemas

I love how he has got the t-shirts planned out already. One of them says "It's Really Written In Flare". Can you guess why? It's because this language is going to catch on, everywhere, and everyone will choose to abandon their language wars and unite together under the banner of Flare. Linus Torvalds, who to this day has resisted the scourge of email with markup in it, will assent to maintenance of the kernel in Flare, instead, because it's just so much better. And Flare is so expressive that all existing code in every language will be losslessly converted to and from Flare whenever a Real Programmer has to deal with The Unenlightened. So the code might be *committed* in C, but it was *written* in Flare.

And he hasn't thought about versioning. In a language built on a serialization method.
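To make the versioning point concrete, here's a toy illustration (obviously not Flare) of why unversioned XML fails silently where protobuf's numbered fields survive renames:

```python
# Rename one tag between "releases" and old data silently stops matching:
# no error, no migration path, nothing. XML tag names ARE the identity,
# whereas protobuf field numbers survive a rename.
import xml.etree.ElementTree as ET

old_object = ET.fromstring("<obj><colour>red</colour></obj>")

# a later version of the program looks the field up under its new name
print(old_object.findtext("color"))    # None -- the mismatch passes silently
```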

SolTerrasa fucked around with this message at 21:59 on Jan 28, 2015

Peel
Dec 3, 2007

If we're talking about strange Internet people and programmers, here is a link to Richard Kulisz.

quote:

So, Smalltalk, LISP and Self ==> OO + real + objects + matter. Java, C++ ==> dead crap + fake + insubstantial + ectoplasm. Also, OO <=> Good, and Java <=> Bad. The reason Java and C++ prevailed and OO lost is because most people are retarded brain-damaged idiots incapable of grasping OO. Just like they're incapable of grasping Goodness is the reason why we have capitalism and coal and disease and poverty and wars and death. Bad to the retards is "Good Enough". This is the Worse Is Better crowd.

He doesn't like physics. Or the rest of academia. Or feminism. Or Yud. Or you. Likes D&D though.

Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1
I am deeply confused by this whole... Thing. I am no Big Shot Developer, but from my limited experience, when you implement a feature it is roughly a process of three stages: 1. you have to understand what you want to do, 2. you have to understand how and why what you want to do is possible/feasible to implement in practice, 3. you get to the gory details of implementation (of course you need to plan that out too but you see what I am getting at).

Yudkowsky's challenges lie with phases 1 and 2. He wants to create a "Friendly AI" but he doesn't yet have a rigorous definition for what that's supposed to mean, and the foggy concept he does have, he has no idea how or why you could put together an implementation to actualize it. He has shreds of ideas, fragments of intuition. This is what's called a research problem. Basically at this stage you are reading up on the literature, looking for known solutions on Google and StackOverflow, trying to grasp for relevant knowledge you do have that might apply by analogy. You're squeezing your brain for brain juice, you're opening up your mind to let the inspiration in.

As a rule, programming languages are for phase 3. They do not inspire you in this sense. If you already know more or less what you want to do, a programming language will be a good tool to implement it with or a bad tool, but generally speaking it won't give you this Big Idea you're looking for. Sure, some programming languages have a Big Idea in them, and sure, for some problems it must have been the breakthrough- the idea the problem was waiting for. But you don't GET an idea like that by virtue of designing a programming language. That's cargo cult science. You're going to have to, well, think of the idea yourself. And even if you do, the idea is what will matter, and you would be able to implement it in any language you choose, it's just a matter of convenience. What? Self-modifying code? Assembly Language has that already - in its most beautiful, raw, terrible form. Does the Tao of Assembly provide one with the epiphany of how to make sure the AI does not turn everyone into paperclips?

Qwertycoatl
Dec 31, 2008

Triple Elation posted:

I am deeply confused by this whole... Thing. I am no Big Shot Developer, but from my limited experience, when you implement a feature it is roughly a process of three stages: 1. you have to understand what you want to do, 2. you have to understand how and why what you want to do is possible/feasible to implement in practice, 3. you get to the gory details of implementation (of course you need to plan that out too but you see what I am getting at).

I think the thing is, he has no idea how to even start writing an AI, but designing a programming language is something you can write pages and pages about, creating the illusion of progress.

It's a gigantic waste of time that could never amount to anything, but it probably feels much better to do that than to sit around watching anime thinking about AI design for months with nothing at all to show for it.

pentyne
Nov 7, 2012
HPMOR got updated, and this time Yud is promising an update on 2/13. Can't wait for the inevitable "Sorry, got distracted with world saving AI research, need someone with a beach house cottage to foster me for free while I finish it FOR REAL this time"

Pavlov
Oct 21, 2012

I've long been fascinated with how the alt-right develops elaborate and obscure dog whistles to try to communicate their meaning without having to say it out loud
Stepan Andreyevich Bandera being the most prominent example of that

Sham bam bamina! posted:

Have you considered just reminding the guy of the sheer volume of Yudkowsky's certifiable bullshit? The cryonics, the belief in omniscient/omnipotent AI, the eternally unfinished Harry Potter magnum fanficus (despite the loving personal cabin reserved for its completion), the worthless "research institute" and its attendant "charity", "Timeless Decision Theory" and the obvious credence that Yud gives to Roko's Basilisk, the abuse of Knuth's notation, the terminal misapplication of Bayes' theorem, the... hell, there's too much garbage to list!

I talked to the dude for like 5 minutes. I don't want to make a big project out of this. He just asked for a specific example from Yud's writing and thought I'd find him a particularly good one and hope he can figure out the rest.

A Wizard of Goatse
Dec 14, 2014

I'm glad that we've finally gotten to the real juicy dirt on this guy, namely that he programs badly (?)

Telarra
Oct 9, 2012

It'd be less notable if he hadn't dedicated his life to the holy grail of programming.

Sham bam bamina!
Nov 6, 2012

ƨtupid cat

Pavlov posted:

I talked to the dude for like 5 minutes. I don't want to make a big project out of this. He just asked for a specific example from Yud's writing and thought I'd find him a particularly good one and hope he can figure out the rest.
He starts with conclusions that he wants ("The Singularity will produce a god-level AI, and donating to my incredibly productive research institute will save the world from an evil one," "There has to be a way for me to live forever, and cryonics is basically my only option today.") and chucks away absolute facts like "bigger numbers don't make claims more probable" and "these claims about AI crash into hard physical limits" and "freezing a dead brain makes it even worse" whenever they get in his way as he works backwards through blithe abuses of mathematics that favor his conclusions, which is practically the antithesis of rational thought. He has literally stated that the physical laws that contradict his ideas just have to be mistaken in some way, because his ideas are clearly right a priori.

Sham bam bamina! fucked around with this message at 04:06 on Jan 29, 2015

Krotera
Jun 16, 2013

I AM INTO MATHEMATICAL CALCULATIONS AND MANY METHODS USED IN THE STOCK MARKET
I'm writing an Eliza bot! It's not in Flare! I'm such a rebel.


Triple Elation
Feb 24, 2012

1 + 2 + 4 + 8 + ... = -1

A Wizard of Goatse posted:

I'm glad that we've finally gotten to the real juicy dirt on this guy, namely that he programs badly (?)

1. It's not that he programs badly; it's that he has taken it upon himself to give the world another programming language. That's a significant undertaking.
2. It's not that he has taken it upon himself to give the world another programming language; it's that he came into it with all the smug eagerness to market it and toot his own horn that you might expect of an expert who has struggled with a problem for a long time, finally had a Really Good Idea, and put together a working Proof of Concept. Except he's not an expert, his idea is apparently not that good, and it's not even a significant step forward for the problem he's tackling. If he had approached this whole thing with a little more humility, we would be having a completely different discussion.

Yudkowsky, in a better universe posted:

Hey guys, I'm not an expert on programming languages, but I thought it would be a really nice challenge for me to try and put one together. My area of work is artificial intelligence so I want something that will be really natural to work with in that domain, something like Prolog maybe. We can make a community project out of it. Here's the git repo, here's a discussion thread, let me know what you think about the design. This is an amateur effort so don't take it too seriously, but I really hope it will become really cool and useful eventually!
