|
You can tell this is a 2000-era paper from the XML obsession. quote:At present, Chrystalyn is simply an idea. As with Elisson and Flare, it will probably take at least a month of thought to translate the idea into a design, then another month if I need to publish it on the Web.
|
# ? Jan 27, 2015 20:44 |
|
|
# ? May 4, 2024 13:59 |
|
The problem with taking a 'Yud is irredeemably crazy' tack is that he isn't, he's an SF blogger with a grandiose self-image. Point him at Yud's lack of concrete results or involvement in actual AI research, the assumptions his futurology and social speculation relies on, and the old hat his philosophy is so he has a more realistic view. If he's getting into philosophy maybe point him at Hume so he has a point of reference for talking to traditional philosophers. There's no reason someone's intellectual development couldn't go via Less Wrong so long as they don't buy into the cult and stagnate. SolTerrasa posted:the Singularity Institute (which folded amid embezzlement controversy, so he founded MIRI). You can't just drop that and then not expand on it.
|
# ? Jan 27, 2015 22:19 |
|
Peel posted:The problem with taking a 'Yud is irredeemably crazy' tack is that he isn't, he's an SF blogger with a grandiose self-image. Point him at Yud's lack of concrete results or involvement in actual AI research, the assumptions his futurology and social speculation relies on, and the old hat his philosophy is so he has a more realistic view. If he's getting into philosophy maybe point him at Hume so he has a point of reference for talking to traditional philosophers. I do actually think that Yud is irredeemably nuts (not crazy, just fifteen degrees shifted from reality), but I don't think you're wrong. My intellectual development went via lesswrong, after all. But it'll be much easier to convince Catbug's philosopher friend that Yud is a hack than that he's crazy. Try this: Yudkowsky has had grandiose ideas since he was 17, and in those two decades he has implemented zero of them. He is, at best, a popularizer of rationalist principles, though he conflates them with his own singularity-seeking views to an extent that should be alarming. The subset of his work which is well-done is not original; the subset of his work which is original is panned unanimously among recognized experts. quote:You can't just drop that and then not expand on it. I first read it here: http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/ A GiveWell staffer says don't donate to SI, citing a theft of 20% of their operating budget. Less than a year later, SI sells its only asset (the brand of the Singularity Summit) to the now existing Singularity University. Less than a year after that, MIRI forms with many of the same people (minus the thief).
|
# ? Jan 27, 2015 22:54 |
|
SolTerrasa posted:I do actually think that Yud is irredeemably nuts (not crazy, just fifteen degrees shifted from reality), but I don't think you're wrong. My intellectual development went via lesswrong, after all. But it'll be much easier to convince Catbug's philosopher friend that Yud is a hack than that he's crazy. Try this: Yudkowsky has had grandiose ideas since he was 17, and in those two decades he has implemented zero of them. He is, at best, a popularizer of rationalist principles, though he conflates them with his own singularity-seeking views to an extent that should be alarming. The subset of his work which is well-done is not original; the subset of his work which is original is panned unanimously among recognized experts. See, I know there's a lot of things I could say about Yudkowsky himself, but I'd rather try to convince this guy without having to make a personal attack. I know I've seen people posting stuff where Yud manages to explain an idea (poorly) and then completely contradict himself by the time he's finished. If this guy fancies himself a Rationalist, I think that's the kind of thing that would help convince him.
|
# ? Jan 28, 2015 00:01 |
|
Pavlov posted:See, I know there's a lot of things I could say about Yudkowsky himself, but I'd rather try to convince this guy without having to make a personal attack. I know I've seen people posting stuff where Yud manages to explain an idea (poorly) and then completely contradict himself by the time he's finished. If this guy fancies himself a Rationalist, I think that's the kind of thing that would help convince him. That is something you won't find outside of technical matters. Yudkowsky is internally consistent at any given time. It's one of his admirable qualities. He's widely variable over time, but I actually don't consider that a failure. One thing you will find is that his fear of death overrides his reason. He believes in a 6% chance that a body which is cryonically frozen today will be viable in the future. You will not find a biologist (who isn't profiting from cryonics) who will come within an order of magnitude of that confidence (most will tell you 0.0%), but Yudkowsky regards it as such an obvious choice that it would be irrational NOT to freeze your dead body. He does not update that belief on evidence, like the time that an Alcor employee wrote a book about systemic mistreatment of the corpses in their care. But I mean, really, that's not even that bad. I'm scared of death too, and if I were a little less sane and a little more self-important, maybe I'd believe what he does. Being a rationalist isn't that bad. Being a Yudkowsky cultist definitely is.
|
# ? Jan 28, 2015 00:23 |
|
ALL-PRO SEXMAN posted:What if the moon was made of ribs? Then there would be approximately 2.784x10^26 calories of meat orbiting the Earth.
|
# ? Jan 28, 2015 01:27 |
|
that thing I posted earlier posted:A merely transhuman AI (as opposed to a Power) might have trouble renting a nanotechnology lab without attracting attention. So, if the Singularity Institute has the money, we should have a nanotechnology lab in our basement. The remarkable thing about nanotechnology, circa 2000, is how cheap the basic equipment is. Having a nano lab is likely to be considerably easier than having our own supercomputer. Circa 2000, a pocket nanotech lab would probably consist of a scanning tunnelling microscope, a DNA sequencer, and a protein synthesis machine. I love futurists.
|
# ? Jan 28, 2015 03:19 |
|
Pavlov posted:See, I know there's a lot of things I could say about Yudkowsky himself, but I'd rather try to convince this guy without having to make a personal attack. I know I've seen people posting stuff where Yud manages to explain an idea (poorly) and then completely contradict himself by the time he's finished. If this guy fancies himself a Rationalist, I think that's the kind of thing that would help convince him. Guy isn't a hypocrite or a huckster or an idiot, he's just a third-rate theologian whose proof of the existence of computergod amounts to this: we had abacuses in 1900 and now we have iPhones; therefore in another hundred years or less everything we know about physics or observed reality will be so much tribal superstition and we'll have pulled off a Civilization tech win where all your dreams will, naturally, come true. It's not inconsistent, it's just unsupported and insupportable, because it relies on a sneering equivalence between any observations available to modern man and the witch doctor blaming evil spirits for making the harvest fail, and an interpretation of Bayes' theorem very close to this: if you can phrase an argument in the form of a sufficiently large made-up number then it must be true.
|
# ? Jan 28, 2015 03:48 |
|
SolTerrasa posted:I love futurists. Where is the $5 million e-beam writer? Where are all the things to do litho? Is all of the God-AI's nanotech going to be wetware?
|
# ? Jan 28, 2015 03:49 |
|
bartlebyshop posted:Where is the $5 million e-beam writer? Where are all the things to do litho? Is all of the God-AI's nanotech going to be wetware? Can you imagine if he'd heard of 3d printers in 2000? More seriously though, I love all these documents he wrote to obscure how terrible he is at working. Seriously, if you haven't yet, read his technical timeline. This is a man who has seriously debated whether he needs a devoted memeticist for his new programming language, or whether simply creating it will be enough to ensure it gets more popular than Python and Java (no poo poo). He has deeply considered whether v1 of the language will be spectacularly brilliant enough to mean that a port of the Linux kernel is inevitable, or if it will have to wait until v2. And then he never actually got around to MAKING it. Here is the project page, fyi: http://flarelang.sourceforge.net
|
# ? Jan 28, 2015 07:31 |
|
SolTerrasa posted:Can you imagine if he'd heard of 3d printers in 2000? First, Sourceforge is instant lol. Second, it's probably just like the AI chatbot thing. Once he got past the first three or four of his acolytes who loved it and hit someone with domain expertise who told him it was stupid, he gave up immediately.
|
# ? Jan 28, 2015 07:34 |
|
SolTerrasa posted:diamondoid drextech (SolTerrasa note: those are meaningless sounds) Correction: they're only meaningless if you're an actual scientist. But if your sole reference point for science is fiction, they mean, "I've read Neal Stephenson's The Diamond Age" and "I've read Eric Drexler's breathless predictions about nanotech". While we're on the subject, Drexler and all the other typical nanotech evangelists are wrong about practically everything.
|
# ? Jan 28, 2015 08:50 |
|
Re: Yudkowsky failing as a rationalist - I still think that one of the most immediately, obviously atrocious examples of this is Torture vs. Dust Specks. I remember reading it as the exact point where I thought "yeah, better take everything else I read around here with a grain of salt". To top it off, Yudkowsky does at least profess to have a proper concept of "huh, at this point my conclusions are so absurd, it's easier for me to believe I just failed to account for something" -- he just fails to act on it. I can't help but notice that this is a recurring trend. Yudkowsky will outline a sociological / epistemological / what-have-you issue and provide a nice catchy explanation for it, but will gladly and blindly walk into the very same traps that he outlined when his own intuitions lead him to them. When he's not indulging in them, Less Wrong is actually a neat source for Things Not To Do (my favorite off the top of my head is When None Dare Urge Restraint). Triple Elation fucked around with this message at 17:17 on Jan 28, 2015 |
# ? Jan 28, 2015 09:04 |
|
SolTerrasa posted:I really cannot express how fantastic this link is without posting the whole drat thing, but here are some choice quotes. quote:"The Plan to Singularity" is a concrete visualization of the technologies, efforts, resources, and actions required to reach the Singularity. Its purpose is to assist in the navigation of the possible futures, to solidify our mental speculations into positive goals, to explain how the Singularity can be reached, and to propose the creation of an institution for doing so. I know it's low-hanging fruit, but drat. Edit: "But that pretense of Vulcan logic, where you think you're just going to compute everything correctly once you've got one or two abstract insights—that doesn't work in real life either." -- Eliezer Yudkowsky 90s Cringe Rock fucked around with this message at 11:45 on Jan 28, 2015 |
# ? Jan 28, 2015 11:42 |
|
chrisoya posted:I'm Eliezer S. Yudkowsky: author, dreamweaver, visionary, plus programmer. You are about to enter the world of my rationality; you are now entering my Singularity. This entire thread I've been trying to pin down who Yud reminds me of and this is exactly it, THANK YOU.
|
# ? Jan 28, 2015 11:51 |
|
The Time Dissolver posted:This entire thread I've been trying to pin down who Yud reminds me of and this is exactly it, THANK YOU. Help a poor dumb goon out, I don't understand the reference
|
# ? Jan 28, 2015 12:32 |
|
http://www.videobash.com/video_show/garth-marenghi-s-darkplace-intro-1047472
|
# ? Jan 28, 2015 12:39 |
|
Right, the loving dust specks. That might be a good thing to throw at my guy. I might be missing something with the When None Dare Urge Restraint though. Looks to me like he's just saying "People were scared after 9/11, and this led to Iraq, which was dumb." Except he does it with an extra helping of "But I totally called it though."
|
# ? Jan 28, 2015 16:06 |
|
Does he ever even update his priors? Like, for all the sloppy blowjobs he gives Bayes he seems to be pretty bad at taking advantage of its greatest strength. Has he actually stated a disdain for experiments and such (you know, the meat and potatoes of empirical science), or is it just implied by his reliance on pulling numbers out of his rear end and his allergy to anything approximating work?
|
# ? Jan 28, 2015 16:20 |
|
Pavlov posted:I might be missing something with the When None Dare Urge Restraint though. Looks to me like he's just saying "People were scared after 9/11, and this led to Iraq, which was dumb." Except he does it with an extra helping of "But I totally called it though." I don't care much about the specific example he gives, I'm talking about the general notion: Yudkowsky posted:[..] just as the vast majority of all complex statements are untrue, the vast majority of negative things you can say about anyone, even the worst person in the world, are untrue. [..] It is just too dangerous for there to be any target in the world, whether it be the Jews or Adolf Hitler, about whom saying negative things trumps saying accurate things. [..] Once restraint becomes unspeakable, no matter where the discourse starts out, the level of fury and folly can only rise with time. cf. Orwell posted:Almost nobody seems to feel that an opponent deserves a fair hearing or that the objective truth matters as long as you can score a neat debating point. [..] The atmosphere of hatred in which controversy is conducted blinds people to considerations of this kind. To admit that an opponent might be both honest and intelligent is felt to be intolerable. It is more immediately satisfying to shout that he is a fool or a scoundrel, or both, than to find out what he is really like.
|
# ? Jan 28, 2015 17:43 |
|
Oh yeah, the Flarelang stuff. I already did a long ranty review earlier in the topic, but basically it's a hyperverbose Python that confuses a lot of pretty fundamental distinctions (e.g. between variables and their values) and operates on XML objects instead of hashtables. It doesn't really include any features that can't in some way be extrapolated from Python, XML, or the first ten minutes of a trendy language he doesn't know that well. He's kind of verbose and confusing, but from reading his feature list I think most people would take away:

- Yudkowsky knows a little bit about XML.
- But Yudkowsky doesn't know Python very well.
- And Yudkowsky definitely doesn't know any languages other than Python or XML very well. (He talks a lot about C++, but I'm not convinced he knows it.)

There are a few neat features -- a text representation of the code is generated from a syntax tree, instead of the other way around. There are also some really stupid ones, like "voice comments" -- instead of typing them, attaching sound-file explanations of what parts of code mean. Overall it's not wildly horrible for an early go at a language design, but it's not good. A representative helping of his feature document:

reinventing the lambda posted:Quoting an expression, as in `a + b`, results in a miniature Flare codelet - that is, a fragment of FlareCode that can be passed around. Conversion to a Value results in the expression being evaluated; the expression is reevaluated each time a Value is needed. When treated as an Operand, the expression itself is passed around. The transparency idiom is similar to that for references.

reinventing operator sections posted:Level Four

reinventing the callback posted:As described in Causality (yes, you need to read the paper), there are essentially two kinds of causality; action-based and validity-based. Causality occurs when element/object B makes a computation that uses data from element/object A, and when the application logic requires the maintenance of a correspondence (mapping) between element A and element B. Validity-based causality is when a change to A causes B to be marked as having invalid state, in which case B needs to recompute before it can be reused. Action-based causality involves maintaining the correspondence in real time; when A is changed, B is notified in time for the corresponding change to be carried out on B.

reinventing the goto posted:Functions, inside a method, that can be called without adding a new method call to the stack - possibly even called within the current block. A section of FlareCode, contained inside a semistatic local variable, which is interpreted directly when called, rather than a new method call being added to the stack. ("Semistatic local variable": Having a default value, but locally overridable by assignment - parenting rules.)

Also "semistatic local" is stupid because locals are already capable of having default values -- they're called "whatever you set them to first." I don't understand what being static has to do with this.

reinventing closure posted:The ability to pass a subfunction or a quoted expression outside its originating method call, and have any invocation of that subfunction or quoted expression refer to local variables contained in the originating method call.

reinventing toposort posted:(in a section on how he determines what module-level 'static' code runs first, other than import order, which is what Python uses)

By the way, he seems to think not all code on the module level in Python is designed to be executed (as opposed to stuff related to structure, like Java class declarations). He's wrong.

reinventing plugins posted:Part of the general principle of annotation is distant processes, even processes that are innocent of each other, working together to assemble a single object. In terms of modules, this means that multiple files should be able to belong to the same module - for example, multiple FlareCode files may contain data for a single class - or a single global object. The idea would be that, on reading in a FlareCode file, the FlareCode file may specify subelements to be added to a target element - where that target may be a class, a module, et cetera. If that target element does not yet exist, it is created - this prevents self-assembly from being dependent on order of execution. (Alternatively, initialization priorities can be used.)

reinventing English posted:Extraction of FlareSpeak from FlareCode, or FlareCode from FlareSpeak, is metaphorically similar to layered feature extraction in a sensory modality.

reinventing garbage collection posted:Two-way references allow for safe dynamic deletion and safe stack allocation.

(Cyclic garbage is the edge case that breaks any reasonably simple GC. In Flare it seems to be a required default. I have no idea why or what the requirement pertains to, because the language in the section it occurs in is so unclear.)

reinventing operator overloading posted:In the expression `a + b`, the actual addition is delegated to one of the operands. Which operand the task is delegated to is determined by the priority of the class/type/metadata for that operand. Thus, "priority" is a subelement of the Metadata class, which is translated into a floating-point or fixed-point number in the C++ representation of the uberdata. The higher-priority operand is asked to handle the operator first; if that fails, the interpreter attempts to delegate the operation to the other operand; if that fails, the expression fails. If the two priorities are equal, the left-hand operand is asked first. Except in cases of operator overloading, the actual operation will presumably be carried out by a C++ function which implements that operator. The C++ uberdata for any type/class/metadata will contain a pointer to a structure containing the function pointers for the language-standard implementations of that operation on that type, or to the C++ functions which implement the delegation of operator overloading to Flare methods. The right-hand addition operator may differ from that of the left-hand addition operator (addition is not always commutative - it is not commutative for strings, for example). In other words, `2 + foo` may call a different function on `foo` than `foo + 2`.

This doesn't solve the problem it was probably meant to solve. In Python, if you have hypothetical types B (user-written) and A (built-in), then A isn't aware of B and can't necessarily adapt its methods to work with B unless B provides an interface A knows how to work with. Under Yudkowsky's rules B can say "I don't implement your interface, but I'm *amazing* and all my versions of your operations (if they're provided by operators) take precedence over your versions." But if I introduce user type C that's also meant to interoperate with A, and try to make it interoperate with B, there's still no connection. Instead of requiring language users to formalize *how* their type is suitable for an operation (which is what requiring an interface does), he's just letting them claim "don't worry, I totally have a great version of this op" based on guaranteed-incomplete knowledge of what other types, user and builtin, implement that operation.

reinventing lists posted:2.13: List subelement type

reinventing properties posted:An interception - in canonical form, an element in %intercept aka <p-intercept> - is a subelement (or, in this case, sub-subelement) that changes the behavior of the enclosing element, the location or superlocation. Rechecking for an interception every time any element is accessed would tend to bring the interpreter to its knees, and for that matter, cause an infinite recursion error if done the obvious way (i.e., if the FLHasSubElement("redirect_lookup") call also automatically tries checking for the <redirect_lookup> interception...).

reinventing doing the research posted:Functional programming languages distinguish between "functions" and "procedures"; under this usage (not standard in this document, BTW) a "function" returns an output based on its inputs and has no other effects; a "procedure", used by evil procedural programming languages, makes changes to global state. While I have never tried to write a program in a truly functional programming language (Haskell, for example), I can see that controlling side effects would be a powerful optional way of enforcing cleaner programs.

While I have never tried ramming a banana up my rear end I can see that ramming a banana up my rear end would be a powerful optional way of jacking off.

reinventing no they don't you troglodyte please learn Haskell before complaining about it posted:There are some common design patterns that involve temporary breaks in an otherwise controlled set of side effects; for example, caching, in which a pure function has a strictly internal global variable that matches previously computed results to previously encountered arguments. (Output is still a strict function of input; only the amount of time necessary to compute the result is changed as a result of cache state.) Profilers need to be able to change certain pieces of local state, for example, a counter showing the number of times a function has been called, even if the function is declared const. Stack-based modifications temporarily change the global state, but then restore it.

I know nothing about lazy evaluation or the ST monad (which was specified as early as 1993, for the love of God!). Please stab me with a greasy spork.

reinventing wow, it's almost as if your goal is to track an additional layer of metadata about the values in your program to determine if they're suitable - allowed, rather -- to perform certain operations one might otherwise unquestioningly assume they support posted:Side effect control means that a const instance method must not change the object, and that a function called with a const argument must not change the argument. Using static typing to control constness, however, means that a function called with a const argument must not only refuse to change the argument, but if returning a reference derived from that argument, must return a result that the calling function needs to treat as const, whether or not the calling function has promised not to modify that argument. In C++, this is necessary; supposing that the calling function had promised to treat an object as const, passing it to a const function and getting back a non-const reference would create the potential for violation of side effect control. But it violates innocence; whether the side-effect-controlled function treats the argument as const can have an effect on whether the calling function must treat as const references derived from that argument.

Get a type system Yud
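For contrast: the delegation problem Flare solves with a floating-point "priority" number is the one Python already handles with `__add__`/`__radd__` and the `NotImplemented` sentinel -- each operand gets a turn, and nobody needs a global precedence float. A minimal sketch (the `Meters` type is invented for illustration):

```python
class Meters:
    """Toy wrapper type, invented purely for this example."""
    def __init__(self, n):
        self.n = n

    def __add__(self, other):
        # The left operand is asked first.
        if isinstance(other, (int, float)):
            return Meters(self.n + other)
        return NotImplemented  # punt: let Python try the other operand

    def __radd__(self, other):
        # Called only after the left operand's __add__ gave up.
        if isinstance(other, (int, float)):
            return Meters(other + self.n)
        return NotImplemented

print((Meters(2) + 3).n)  # 5, via Meters.__add__
print((3 + Meters(2)).n)  # 5, via Meters.__radd__ after int punts
```

Python even covers the "my type knows better" case structurally: when the right operand is a subclass of the left, its reflected method is tried first, no hand-assigned priority required.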
|
# ? Jan 28, 2015 18:48 |
|
Christ, I couldn't get through half of that. It feels like what Clojure would be if Rich Hickey was an idiot.
|
# ? Jan 28, 2015 18:55 |
|
Germstore posted:Christ, I couldn't get through half of that. It feels like what Clojure would be if Rich Hickey was an idiot. That's not a bad comparison. Clojure has a lot of neat usability features you can implement in terms of the core language, and Flare has a lot of allegedly novel features that could be implemented in about ten lines of Python. Clojure's a small language that feels big and Flare is a smallminded language that feels bigheaded. By the way, if I missed explaining the Flare object model in the above, short version (by memory) is that everything's an object, variables are like objects, some objects are Values (which are like values in any other language) but others are Expressions (which are often Operands), which are a little like lambdas and a little like lazy values depending on what mood the type system is in, although there's no static typing, and to be honest they're actually syntax trees -- lists and other data structures are made of things that are similar to objects, or maybe they're just XML -- but objects are XML, and so is code -- and a name's metadata, which has fields, is separate from the name's associated value, which has fields, and the value of a thing (and sometimes its metadata) varies based on scoping rules specified both in the object, variable, or data structure itself (in which case it's called an interceptor) and on builtin rules assigned Java/C++like names like "const" and "semistatic" -- and those rules often even (i.e. in const) represent reversible transformations of the data that seem to last as long as the object is in a stack frame below the one where those modifiers apply (although per another page Flare does not run on a stack but a tree, which is like a stack), unless code where the object was bound to another name without those modifiers is allowed to run (via closures or otherwise), this is not a type system for it occurs at runtime, dynamically, wave of the future, stop making fun of me, it's too confusing for you, give us your money. Krotera fucked around with this message at 19:21 on Jan 28, 2015 |
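About "ten lines of Python": the "quoted expression" codelets from the feature list, for instance, are roughly a thunk that reevaluates on demand. A sketch of that reading (the `Quoted` name is invented here):

```python
class Quoted:
    """A Flare-style 'codelet': an unevaluated expression, passed around as data."""
    def __init__(self, thunk):
        self.thunk = thunk

    def value(self):
        # "The expression is reevaluated each time a Value is needed."
        return self.thunk()

xs = [1, 2]
expr = Quoted(lambda: sum(xs))  # quoting `sum(xs)` instead of evaluating it
print(expr.value())  # 3
xs.append(3)
print(expr.value())  # 6 -- picks up current state, as the feature doc promises
```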
# ? Jan 28, 2015 19:06 |
|
I am amazed that someone would seriously succumb to "Not Invented Here" paranoia due to freakin' Python. Oh no! I wanted to try my hand at some hardcore AI stuff and all the functionality I needed wasn't there at all. I had to write "import sklearn". That's a whole 14 characters! How am I supposed to implement an omnipotent benevolent AI using this thing?
|
# ? Jan 28, 2015 19:20 |
|
Triple Elation posted:I am amazed that someone would seriously succumb to "Not Invented Here" paranoia due to freakin' Python. Oh no! I wanted to try my hand at some hardcore AI stuff and all the functionality I needed wasn't there at all. I had to write "import sklearn". That's a whole 14 characters! How am I supposed to implement an omnipotent benevolent AI using this thing? IIRC his rationale went like this:

- "annotative" programming (the object model I described in the above post) is too important not to have
- Python is bad at expressing self-modifying code

Speaking of the second point. code:
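For the record, stock Python handles code-as-data fine, if verbosely, via the stdlib `ast` module. A small hypothetical sketch -- parse source, rewrite the tree, run the result:

```python
import ast

# A program rewriting its own code as data, in plain Python.
source = "def answer():\n    return 41\n"
tree = ast.parse(source)

# Find the constant 41 in the tree and bump it -- a trivial 'self-modification'.
for node in ast.walk(tree):
    if isinstance(node, ast.Constant) and node.value == 41:
        node.value = 42

namespace = {}
exec(compile(tree, "<rewritten>", "exec"), namespace)
print(namespace["answer"]())  # 42
```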
|
# ? Jan 28, 2015 19:28 |
|
Pavlov posted:Right, the loving dust specks. That might be a good thing to throw at my guy.
|
# ? Jan 28, 2015 19:40 |
|
I'm beginning to get the impression that this Yud character is something of a buffoon.
|
# ? Jan 28, 2015 19:42 |
|
Dr Cheeto posted:Does he ever even update his priors? Like, for all the sloppy blowjobs he gives Baye's he seems to be pretty bad at taking advantage of its greatest strength. He's not like out-and-out hostile to the scientific method, it's more that whenever the science doesn't support his fantasies he takes the lofty position of the man from the year 40,000 softly chuckling to himself about how those savages used to believe the world worked. See also: computer simulation that effectively creates another, larger universe in order to get extra processing power; cryonics; how the human brain works; how AIs might work; Moore's Law as a more absolute law of physics than the mere properties of electrons. He doesn't feel the need to actually support his extremely specific and wrong claims because history will inevitably vindicate him and prove everyone else wrong without further effort on his part; being wrong and an idiot to the 21st century is no biggie because if folks from the 21st century are so smart why aren't they god robots A Wizard of Goatse fucked around with this message at 21:13 on Jan 28, 2015 |
# ? Jan 28, 2015 20:57 |
|
A Wizard of Goatse posted:He's not like out-and-out hostile to the scientific method, it's more that whenever the science doesn't support his fantasies he takes the lofty position of the man from the year 40,000 softly chuckling to himself about how those savages used to believe the world worked. See also: computer simulation that effectively creates another, larger universe in order to get extra processing power; cryonics; how the human brain works; how AIs might work; Moore's Law as a more absolute law of physics than the mere properties of electrons.
|
# ? Jan 28, 2015 21:28 |
|
That was amazing, thank you. One more problem is that Yud's wonderful innovative system of Planes (which he has to contrive awful syntax for, ugh) is literally just the Protocol Buffer's Message Set, which Google open-sourced a while ago. And this is (one of, oh god so many reasons) why you don't base your language on a serialization method: someone comes up with a better one. Another, for those of you who are technical and following along with this Flare thing: can you *imagine* versioning, any versioning at all, language or class or individual object versioning, in a system based exclusively on XML serialization? Protobuf has a well-specified set of allowable forward changes to keep stored old protobufs wire-compatible with new ones. XML... Does not. And *everything* is XML. The Flare code is XML, the data is XML, the interpreter state is XML, the "domules" are XML, and it's all cripplingly unversioned. Here is Yud, in the hilarious future where he had a work ethic and actually wrote this stupid thing. http://stackoverflow.com/questions/2014237/what-are-the-best-practices-for-versioning-xml-schemas I love how he has got the t-shirts planned out already. One of them says "It's Really Written In Flare". Can you guess why? It's because this language is going to catch on, everywhere, and everyone will choose to abandon their language wars and unite together under the banner of Flare. Linus Torvalds, who to this day has resisted the scourge of email with markup in it, will assent to maintenance of the kernel in Flare, instead, because it's just so much better. And Flare is so expressive that all existing code in every language will be losslessly converted to and from Flare whenever a Real Programmer has to deal with The Unenlightened. So the code might be *committed* in C, but it was *written* in Flare. And he hasn't thought about versioning. In a language built on a serialization method. SolTerrasa fucked around with this message at 21:59 on Jan 28, 2015 |
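To make the versioning point concrete: protobuf's rules (stable tag numbers, unknown fields preserved) let an old reader survive a new writer, while ad-hoc XML just silently loses data. A toy sketch with the stdlib (element names invented):

```python
import xml.etree.ElementTree as ET

# A "v1" reader expects <speed>; a later writer renamed the element <velocity>.
old_doc = "<robot><speed>3</speed></robot>"
new_doc = "<robot><velocity>3</velocity></robot>"

def read_speed(doc):
    node = ET.fromstring(doc).find("speed")
    return None if node is None else int(node.text)

print(read_speed(old_doc))  # 3
print(read_speed(new_doc))  # None -- no schema, no error: the data just vanishes
```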
# ? Jan 28, 2015 21:56 |
|
If we're talking about strange Internet people and programmers, here is a link to Richard Kulisz.quote:So, Smalltalk, LISP and Self ==> OO + real + objects + matter. Java, C++ ==> dead crap + fake + insubstantial + ectoplasm. Also, OO <=> Good, and Java <=> Bad. The reason Java and C++ prevailed and OO lost is because most people are retarded brain-dameged idiots incapable of grasping OO. Just like they're incapable of grasping Goodness is the reason why we have capitalism and coal and disease and poverty and wars and death. Bad to the retards is "Good Enough". This is the Worse Is Better crowd. He doesn't like physics. Or the rest of academia. Or feminism. Or Yud. Or you. Likes D&D though.
|
# ? Jan 29, 2015 00:02 |
|
I am deeply confused by this whole... thing. I am no Big Shot Developer, but from my limited experience, when you implement a feature it is roughly a process of three stages:

1. You have to understand what you want to do.
2. You have to understand how and why what you want to do is possible/feasible to implement in practice.
3. You get to the gory details of implementation (of course you need to plan that out too, but you see what I am getting at).

Yudkowsky's challenges lie with stages 1 and 2. He wants to create a "Friendly AI", but he doesn't yet have a rigorous definition of what that's supposed to mean, and for the foggy concept he does have, he has no idea how or why you could put together an implementation to actualize it. He has shreds of ideas, fragments of intuition. This is what's called a research problem. Basically, at this stage you are reading up on the literature, looking for known solutions on Google and StackOverflow, trying to grasp for relevant knowledge you do have that might apply by analogy. You're squeezing your brain for brain juice, you're opening up your mind to let the inspiration in.

As a rule, programming languages are for stage 3. They do not inspire you in this sense. If you already know more or less what you want to do, a programming language will be a good tool to implement it with or a bad tool, but generally speaking it won't give you the Big Idea you're looking for. Sure, some programming languages have a Big Idea in them, and sure, for some problems that must have been the breakthrough, the idea the problem was waiting for. But you don't GET an idea like that by virtue of designing a programming language. That's cargo cult science. You're going to have to, well, think of the idea yourself. And even if you do, the idea is what will matter, and you will be able to implement it in any language you choose; it's just a matter of convenience.

What? Self-modifying code? Assembly language has that already, in its most beautiful, raw, terrible form. Does the Tao of Assembly provide one with the epiphany of how to make sure the AI does not turn everyone into paperclips?
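For that matter, you don't even need assembly: any dynamic language hands you "self-modifying code" for free, which is part of why it was never the hard part. A hypothetical toy in Python (the `step`/`make_step` names are made up for illustration):

```python
# "Self-modifying" code without any special language support:
# generate new source at runtime, compile it, and rebind the name.
# Toy only; the point is that this is trivial in any dynamic
# language, so self-modification alone buys an AI nothing.

SOURCE = """
def step(x):
    return x + {increment}
"""

def make_step(increment):
    """Compile a fresh version of step() with a different constant."""
    namespace = {}
    exec(SOURCE.format(increment=increment), namespace)
    return namespace["step"]

step = make_step(1)
first = step(10)       # this version behaves like x + 1
step = make_step(5)    # "self-modification": a brand-new code object
second = step(10)      # the rebound version behaves like x + 5
```

Three lines of `exec` and a rebind; no epiphany about paperclips included.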
|
# ? Jan 29, 2015 00:38 |
|
Triple Elation posted:I am deeply confused by this whole... Thing. I am no Big Shot Developer, but from my limited experience, when you implement a feature it is roughly a process of three stages: 1. you have to understand what you want to do, 2. you have to understand how and why what you want to do is possible/feasible to implement in practice, 3. you get to the gory details of implementation (of course you need to plan that out too but you see what I am getting at). I think the thing is: he has no idea how to even start writing an AI, but designing a programming language is something you can write pages and pages about, creating the illusion of progress. It's a gigantic waste of time that could never amount to anything, but it probably feels much better than sitting around doing nothing.
|
# ? Jan 29, 2015 01:20 |
|
HPMOR got updated, and this time Yud is promising an update on 2/13. Can't wait for the inevitable "Sorry, got distracted with world-saving AI research, need someone with a beach house cottage to foster me for free while I finish it FOR REAL this time"
|
# ? Jan 29, 2015 03:20 |
|
Sham bam bamina! posted:Have you considered just reminding the guy of the sheer volume of Yudkowsky's certifiable bullshit? The cryonics, the belief in omniscient/omnipotent AI, the eternally unfinished Harry Potter magnum fanficus (despite the loving personal cabin reserved for its completion), the worthless "research institute" and its attendant "charity", "Timeless Decision Theory" and the obvious credence that Yud gives to Roko's Basilisk, the abuse of Knuth's notation, the terminal misapplication of Bayes' theorem, the... hell, there's too much garbage to list! I talked to the dude for like 5 minutes. I don't want to make a big project out of this. He just asked for a specific example from Yud's writing, so I thought I'd find him a particularly good one and hoped he could figure out the rest.
|
# ? Jan 29, 2015 03:35 |
|
I'm glad that we've finally gotten to the real juicy dirt on this guy, namely that he programs badly (?)
|
# ? Jan 29, 2015 03:38 |
|
It'd be less notable if he hadn't dedicated his life to the holy grail of programming.
|
# ? Jan 29, 2015 03:59 |
|
Pavlov posted:I talked to the dude for like 5 minutes. I don't want to make a big project out of this. He just asked for a specific example from Yud's writing and thought I'd find him a particularly good one and hope he can figure out the rest. Sham bam bamina! fucked around with this message at 04:06 on Jan 29, 2015 |
# ? Jan 29, 2015 04:00 |
|
I'm writing an Eliza bot! It's not in Flare! I'm such a rebel.
|
# ? Jan 29, 2015 06:03 |
|
|
# ? May 4, 2024 13:59 |
|
A Wizard of Goatse posted:I'm glad that we've finally gotten to the real juicy dirt on this guy, namely that he programs badly (?)

1. It's not that he programs badly; it's that he has taken it upon himself to give the world another programming language. That's a significant undertaking.

2. It's not that he has taken it upon himself to give the world another programming language; it's that he came into it with all the smug, horn-tooting marketing eagerness that you might expect of an expert who has struggled with a problem for a long time, finally had a Really Good Idea, and put together a working proof of concept. Except he's not an expert, his idea is apparently not that good, and it's not even a significant step forward for the problem he's tackling.

If he had approached this whole thing with a little more humility, we would be having a completely different discussion.

Yudkowsky, in a better universe posted:Hey guys, I'm not an expert on programming languages, but I thought it would be a really nice challenge for me to try and put one together. My area of work is artificial intelligence, so I want something that will be really natural to work with in that domain, something like Prolog maybe. We can make a community project out of it. Here's the git repo, here's a discussion thread, let me know what you think about the design. This is an amateur effort, so don't take it too seriously, but I really hope it will become really cool and useful eventually!
|
# ? Jan 29, 2015 16:04 |