|
Phobeste posted:we're bringing up our boards for a product that won't launch until 2017, so we decided to get ahead of the game and pick something that would still be early in its lifespan then Phobeste posted:we have this soc that we're figuring out at the same time as ti and goddammit the payoff better be worth it (it will be and i'm also the one who pushed for this so i should shut the gently caress up and do my job) i hope he has a scope with more than 1 color!! Barnyard Protein posted:context for why the intel drop-outs we have act so shell shocked
|
# ? Oct 30, 2015 04:56 |
|
JawnV6 posted:im going to a fae's lab tomorrow :hi5: dont start dancing
|
# ? Oct 30, 2015 06:05 |
|
JawnV6 posted:so u looked at stmicro's line of 10+ year lifespan products w/ guaranteed mfg and said "no, no thank u" i shouldn't have said shell shocked, that wording implies that there is something wrong with them. they are the minority that acts viscerally upset when a part comes out hilariously broken, everyone else is like "oh well, lol"
|
# ? Oct 30, 2015 06:25 |
|
I wish I could have a fun embedded job like you guys Me? Oh I have a lovely hand crimped ftdi rs422 cable at a customer site that's giving lots of bit errors somehow. Maybe that's the problem? Idk. No remote access other than pushing out another ota software release that logs even more irrelevant garbage to syslog.
|
# ? Oct 30, 2015 06:28 |
|
we are not so different, you and i
|
# ? Oct 30, 2015 07:06 |
|
actually that's the Embedded Experience (tm) JawnV6 posted:so u looked at stmicro's line of 10+ year lifespan products w/ guaranteed mfg and said "no, no thank u" we are best buddies with ti, wanted an arm with graphics acceleration and also their pru stuff to save another discrete device for realtime stuff, makes a lot of sense. it'll be a good decision eventually. i had a webex with their engineer who's doing the linux bringup for that realtime peripheral and he was using osx with sublime text 2 for everything. i gave a small nod of approval
|
# ? Oct 30, 2015 13:34 |
|
Mr Dog posted:I wish I could have a fun embedded job like you guys this is about halfway to being a nihilist arbys tweet, i love it
|
# ? Oct 30, 2015 13:35 |
|
Phobeste posted:it'll be a good decision eventually or famous last words
|
# ? Oct 30, 2015 13:56 |
|
someone was trying to explain to me the difference between traits, classes (abstract or otherwise), and how super works with all of these guys and he told me not to read the standard because it's too convoluted. this was after trying to understand this poo poo via stackoverflow. i just read the chapter on classes and objects in the scala spec and gently caress that it's super straightforward, if not awful to reason about in complex cases.
|
# ? Oct 31, 2015 03:03 |
|
what's the benefit of a trait in scala though? it's like a half interface/half template, neither and both really curious
|
# ? Oct 31, 2015 04:42 |
|
so my general frustration is that afaict things in scala are explainable, but the explanation doesn't actually give you a feeling for why it was done that way. i think that is why people explain features like self-types, traits, etc. by how they're idiomatically used and not how they actually work and relate with other parts of the language. i've noticed a similar problem with a few other langs, imo. afaict, traits are the only mechanism that allow for incomplete types. to start code:
code:
by segregating traits into a separate concept with additional restraints, you can safely leave things incomplete. concrete classes have to be concrete, abstract classes only require implementation in derived classes, and because of the extra limitations on traits always requiring a parent class, you are guaranteed that super.run will resolve to some implementation.
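a minimal sketch of what that guarantee buys you, assuming scala 2 syntax; the trait name Logged is invented for illustration, and the original code block above was lost:

```scala
trait Action {
  def run(): Unit
}

class X extends Action {
  def run(): Unit = println("X.run")
}

trait Logged extends Action {
  // "incomplete": calls super.run() even though Action.run is abstract.
  // abstract override is only legal in a trait, and the compiler checks at
  // mix-in time that some concrete implementation sits behind it.
  abstract override def run(): Unit = {
    println("before")
    super.run()
  }
}

object Demo extends App {
  (new X with Logged).run()   // prints "before" then "X.run"
}
```

because Logged can only ever be mixed in on top of something that implements run, super.run is guaranteed to resolve.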
|
# ? Oct 31, 2015 05:34 |
|
also i still don't truly get self-types and how this code:
code:
quote:The sequence of template statements may be prefixed with a formal parameter definition and an arrow, e.g. x =>, or x:T =>. If a formal parameter is given, it can be used as an alias for the reference this throughout the body of the template. If the formal parameter comes with a type T, this definition affects the self type S of the underlying class or object as follows: Let C be the type of the class or trait or object defining the template. If a type T is given for the formal self parameter, S is the greatest lower bound of T and C. If no type T is given, S is just C. Inside the template, the type of this is assumed to be S. which doesn't actually complete my mental understanding of all of this. if tef or some other phd dropout can explain it to me i'd be really, really happy
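for what it's worth, the usual idiomatic reading of that spec passage looks something like this (names invented for illustration, not from the thread's lost snippet):

```scala
trait Db {
  def query(q: String): String = s"rows for: $q"
}

trait UserService { this: Db =>          // self type: "this" is assumed to be a Db
  def findUser(id: Int): String = query(s"user $id")
}

// UserService alone can't be instantiated; any concrete mixer must supply Db:
object App extends UserService with Db
```

the self type constrains who may mix the trait in, without making Db a supertype of UserService, which is the "greatest lower bound" business in the quoted text.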
|
# ? Oct 31, 2015 05:41 |
|
still not getting the benefit of traits. you could just make a class in normal java with an abstract method for implementation. that's why i said half-template-seeming. the only difference in scala is i can inherit or implement several of these traits, however you want to look at it. but you could do that in java if you organize yourself. i guess this way is more readable for some kind of funky multiple template inheritance.
|
# ? Oct 31, 2015 05:52 |
|
i can describe how traits are stronger than jdk8 interfaces and abstract classes and the interesting things you can do with them that are much more difficult to do in other languages but i've yet to be sold that these interesting things are good things.
|
# ? Oct 31, 2015 06:26 |
|
interfaces are good. in most cases abstract classes are not. sometimes -- with time and without knowing it at first -- you find a good use of an abstract class or an abstract method in a class. but multiple? ehhhhh.... i am not a scalabyist and don't know enough about the language, but on the surface this looks like promotion for some bad practice in almost any design pattern. or i'm really missing something obviously beneficial. hoping dude with horse anime avatar deep into scala pitches in on this.
|
# ? Oct 31, 2015 06:37 |
|
FamDav posted:if tef or some other phd dropout can explain it to me i'd be really, really happy in an extension you normally only get to assume that "this" has the type that you're extending; think of it as saying "here's some more stuff an object of this type can do". that arrow clause makes it mean "here's some more stuff it can do, but only if it also has this other type". in this case, you're saying that class X has a run method, but it's only usable on instances that also have the trait A for some reason. the new is just creating an instance of an ad-hoc subclass of X that has the trait A. A overrides the run() from the Action trait, which is satisfied by the implementation from X
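piecing the lost code block back together from the discussion, it was presumably something along these lines (a guess at the shape, not the original; the thread reports A#run fires first and super-calls into X#run):

```scala
trait Action { def run(): Unit }

// X never extends Action; the self type only *requires* that A be mixed in
class X { this: A =>
  def run(): Unit = println("X.run")
}

trait A extends Action {
  abstract override def run(): Unit = {
    println("A.run")
    super.run()
  }
}

object Demo extends App {
  // linearization of the anonymous class is roughly A, Action, X, ...
  (new X with A).run()
}
```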
|
# ? Oct 31, 2015 07:31 |
|
rjmccall posted:in an extension you normally only get to assume that "this" has the type that you're extending; think of it as saying "here's some more stuff an object of this type can do". that arrow clause makes it mean "here's some more stuff it can do, but only if it also has this other type". in this case, you're saying that class X has a run method, but it's only usable on instances that also have the trait A for some reason. the new is just creating an instance of an ad-hoc subclass of X that has the trait A. A overrides the run() from the Action trait, which is satisfied by the implementation from X what's confusing me now is that either 1. the self-type is affecting class linearization 2. the self-type allows a reference to the subclass because as far as i can understand the linearization could either be code:
code:
code:
|
# ? Oct 31, 2015 08:47 |
|
edit: My previous explanation was incomplete or incorrect. I believe the answer is in your use of abstract override, which causes super to be bound dynamically to whatever super-class or trait is mixed in beforehand that has implemented the appropriate call to super. http://www.artima.com/scalazine/articles/stackable_trait_pattern.html FamDav posted:what's confusing me now is that either Volte fucked around with this message at 12:12 on Oct 31, 2015 |
# ? Oct 31, 2015 12:06 |
|
pepito sanchez posted:what's the benefit of a trait in scala though? it's like a half interface/half template, neither and both traits are a way to do inheritance without putting virtual on things.
|
# ? Oct 31, 2015 14:58 |
|
FamDav posted:what's confusing me now is that either L(X with A) = L(A) + L(X), where + drops things on the left that are present on the right. so it's A, Action, X, AnyRef, Any. i don't know why you don't think that works out, A's method is super-calling X's. if the linearization was X, ..., A, ..., the A method wouldn't get called at all abstract override just means that the method definition is only complete if it actually overrides a complete definition, generally because it calls super
|
# ? Oct 31, 2015 17:18 |
|
i did have the relationship slightly wrong last night: the self-type stuff isn't a conditional requirement, it's a hard requirement for subclasses. so it's basically a way of saying "this trait will be mixed in with this class, but i'm not actually mixing it in yet because i want it to override my implementations instead of the other way around"
|
# ? Oct 31, 2015 17:29 |
|
rjmccall posted:L(X with A) = L(A) + L(X), where + drops things on the left that are present on the right. so it's A, Action, X, AnyRef, Any. i don't know why you don't think that works out, A's method is super-calling X's. if the linearization was X, ..., A, ..., the A method wouldn't get called at all abstract override just means that the method definition is only complete if it actually overrides a complete definition, generally because it calls super yeah, I'd come up with A, Action, X, AnyRef, Any. What confuses me with that derivation is quote:A overrides the run() from the Action trait, which is satisfied by the implementation from X because in this case Action was never a parent of X. also just for fun here's the output when i remove the self-type. quote:method run in trait A of type ()Unit cannot override a concrete member without a third member that's overridden by both (this rule is designed to prevent ``accidental overrides'') so i agree with you that the compiler has X#run override Action#run. i do understand why (new X with A).run() will call A#run() first. and given those two things i understand why i see the behavior i see. i just still do not understand why X#run is allowed to override A#run in the statement list with the self type.
|
# ? Oct 31, 2015 19:20 |
|
FamDav posted:which doesn't actually complete my mental understanding of all of this. if tef or some other phd dropout can explain it to me i'd be really, really happy for the record, i dropped out of a bachelors and got a https://en.wikipedia.org/wiki/Diploma_of_Higher_Education
|
# ? Oct 31, 2015 19:26 |
|
i can now add "trait" to the list of "oo terms that have different semantics across languages" i guess. the whole point of traits was composition, not delegation for inheritance. it seems anathema to call the superclass inside a trait.
|
# ? Oct 31, 2015 19:32 |
|
i agree that it's suspect. my guess is there's an undocumented rule also permitting overrides in the presence of a self-type constraint on the overridden method's class/trait, which seems reasonable language-wise
|
# ? Oct 31, 2015 19:40 |
|
rjmccall posted:i agree that it's suspect. my guess is there's an undocumented rule also permitting overrides in the presence of a self-type constraint on the overridden method's class/trait, which seems reasonable language-wise i can be satisfied with this. now that i've read quite a bit of the standard i can appreciate and maybe even like the language.
|
# ? Oct 31, 2015 19:54 |
|
tef posted:i can now add "trait" to the list of "oo terms that have different semantics across languages" i guess. the whole point of traits was composition, not delegation for inheritance. it seems anathema to call the superclass inside a trait. yeah the way a trait is described above hurts my head. a trait is something that a number of classes can share that's designed to apply a certain attribute or type of functionality to that class, i.e. code:
composition owns and makes so much more sense than Car extends Vehicle Blinkz0rz fucked around with this message at 22:14 on Oct 31, 2015 |
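a guess at the sort of thing the (lost) code block above was showing — traits as composable capabilities rather than an inheritance hierarchy; the names are invented:

```scala
trait HasWheels { def wheels: Int }
trait Honks     { def honk(): String = "beep" }

// composed capabilities instead of a Vehicle class hierarchy
class Car  extends HasWheels with Honks { val wheels = 4 }
class Bike extends HasWheels            { val wheels = 2 }
```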
# ? Oct 31, 2015 20:09 |
|
I'm a PhD dropout now! I don't understand scala's type system at all but it does frustrate me a bit every day. Compose might as well not exist because literally every time I've used it, type inference has failed. Automatic value discarding (i.e. any value can silently become unit) has caused me several problems when accidentally typing == instead of shouldBe. And I can't even turn the warning on because most java methods return a reference to themselves to allow chaining, so our codebase has hundreds if not thousands of instances of calling a method for its side effect and ignoring the returned reference.
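the foot-gun being described looks roughly like this (shouldBe is ScalaTest's assertion method; the snippet below just shows the discarding itself, with a made-up helper name):

```scala
def check(): Unit = {
  val xs = List(1, 2, 3)
  // typo: == where an assertion was meant. the Boolean result is silently
  // discarded because the expected type is Unit, so the "check" never fires
  // and, by default, the compiler raises no warning.
  xs.length == 4
}
```

scala 2 only flags this with -Ywarn-value-discard, which is hard to enable on a codebase full of ignored fluent-API return values.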
|
# ? Oct 31, 2015 22:02 |
|
gonadic io posted:Automatic value discarding (i.e. any value can silently become unit) ew that's gross
|
# ? Oct 31, 2015 23:20 |
gonadic io posted:I'm a PhD dropout now! As another grad student, I'm curious what made you decide to quit?
|
|
# ? Oct 31, 2015 23:35 |
|
FamDav posted:i just still do not understand why X#run is allowed to override A#run in the statement list with the self type. i'm not even 10% sure about this, but traits in scala look like a sort of abstract mixin more than a trait in other languages: a trait isn't flattened, it still builds a linear class hierarchy. like say you could parameterize the superclass in java: class ATrait<S> extends S { } class Foo extends ATrait<Superclass> { } // i.e. Foo extends ATrait<Superclass> extends Superclass and you could mix in multiple traits by One<Two<Three<A>>>. it's kinda like an abstract class without a superclass, but traits themselves can have those. we can add them into the imaginary java above, using sugar which transforms code like this trait ATrait extends AnotherTrait { ... } class Foo extends Superclass with ATrait { } into Foo extends ATrait<AnotherTrait<Superclass>>, with some linear class hierarchy. in scala we build the list of superclasses by depth-first search from right to left of the traits composed, and then de-duplicate the list, removing all copies and leaving the last item in place. this is similar to what python tried before using c3, i think. cf http://python-history.blogspot.co.uk/2010/06/method-resolution-order.html i think an issue here is that subclasses can't override the ordering of composed traits once included. if you have trait P extends A,B and trait Q extends B,A, and some trait Z extends P,Q, then if you extend Z you can't change the order of P and Q. i guess this preserves the ordering of the traits defined further up in the hierarchy, and c3 preserves the ordering of the traits further down. i don't know why they aren't using c3. anyway, back to traits in this imaginary java: now that you have this sugar to turn your extends S with T* into a class hierarchy, you can think about the abstract overrides, or incomplete traits.
if we split every class into two hidden ones, the abstract overrides & everything else, then when we linearise the class/superclasses we prefix it with the abstract-overrides trait order. or if you will, instead of a class hierarchy [Foo, A, B, C ....] we have [incomplete:A, incomplete:B, ... Foo, A, B, ...] a trait in scala must explicitly name the traits (+ their subclasses) that it allows to have abstract overrides, which can be done by just writing 'X with A', but as we've seen, this fixes the order of composition of traits. on the other hand, if we set the self type to A, we're actually setting the self type to the greatest lower bound of X and A, which allows us to opt into this behaviour without fixing the composition ordering. tl;dr i think the self types in scala in your trait examples are a workaround for a poor choice of linearization algorithm. tef fucked around with this message at 23:45 on Oct 31, 2015 |
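the right-to-left, keep-the-last-copy rule being described can be watched directly with a diamond (toy names, not from the thread):

```scala
trait A { def order: List[String] = List("A") }
trait B extends A { override def order = "B" :: super.order }
trait C extends A { override def order = "C" :: super.order }
class D extends B with C { override def order = "D" :: super.order }

// linearization: D, C, B, A — the shared A is de-duplicated and kept once,
// in the last position, so every super chain ends there
val chain = (new D).order   // List("D", "C", "B", "A")
```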
# ? Oct 31, 2015 23:38 |
|
VikingofRock posted:As another grad student, I'm curious what made you decide to quit? I never should have done it in the first place. I loathed all of my dissertations, I don't have a passion for research, I don't want a PhD for its own sake. I said yes because it seemed like the easiest thing to do at the time. I stuck it out for 15 months or so: wrote my papers, spoke at conferences, volunteered for an absurd amount of teaching assisting. But at the end of the day that wasn't enough and it caught up with me. Maybe if I had a better work ethic I could have done it while hating it, but that's not me.
|
# ? Oct 31, 2015 23:41 |
|
scala's linearization makes a class's linearization a pure suffix of its subclasses' linearizations. that seems desirable to me, since it means that there's at least one point at which you can analyze the behavior of your code without knowing exactly how your subclasses are defined, which seems like a requirement for sane library design and hence sustainable software development. but then, i am one of those people who doesn't really care for endless dynamic bullshit, or at least is fine with telling people that they need to be a little more explicit to achieve it. so instead i will just say that this kind of linearization is probably an effective requirement of compiling to the jvm, because scala ultimately wants to be able to assemble classes into jvm classes, and this rule lets you do so without heavy abstraction penalties
|
# ? Nov 1, 2015 00:38 |
|
Ahhhh http://www.charlesetc.com/rust/2015/10/29/ quote:Rust is a unique language in that it deallocates memory on the heap without requiring the writer to call free, while at the same time having no need for a garbage collector. Rust has found its equivalent of the monad tutorial.
|
# ? Nov 1, 2015 00:54 |
gonadic io posted:I never should have done it in the first place. I loathed all of my dissertations, I don't have a passion for research, I don't want a PhD for its own sake. I said yes because it seemed like the easiest thing to do at the time. That's completely understandable. Don't blame yourself at all (with the work ethic stuff). I'm in my fourth year of a doctorate in astrophysics and I've decided that I'm going to stick out the PhD (which will be done next year), but then I'm gonna leave academia for certain and probably try to get a job doing something interesting in silicon valley. Academia is exhausting and thankless, and it's possible that silicon valley work is too, but at least I'll be making a living and getting good benefits! Plus I think the faster problem turnaround in industry will keep things interesting--I've been working on the same project for three years now and I'm only just now getting a paper to show for it.
|
|
# ? Nov 1, 2015 01:07 |
|
I got to file a JVM bug with Oracle because this compiles but doesn't run: code:
Edit: Added "static" to A.print(). CPColin fucked around with this message at 05:07 on Nov 1, 2015 |
# ? Nov 1, 2015 01:14 |
|
rjmccall posted:endless dynamic bullshit the c3 algorithm comes from dylan
|
# ? Nov 1, 2015 02:26 |
|
the argument for c3 is better made here i guess http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.19.3910&rep=rep1&type=pdf Object-oriented languages with multiple inheritance and automatic conflict resolution typically use a linearization of superclasses to determine which version of a property to inherit when several superclasses provide definitions. Recent work has defined several desirable characteristics for linearizations, the most important being monotonicity, which prohibits inherited properties from skipping over direct superclasses. Combined with Dylan’s sealing mechanism, a monotonic linearization enables some compile-time method selection that would otherwise be impossible in the absence of a closed-world assumption. and guess who is responsible for this shoddy piece of dynamic engineering Apple Computer, Inc., and Harlequin, Inc. and Ltd., supported the authors during the design of the Dylan language, when this work was undertaken.
|
# ? Nov 1, 2015 02:33 |
|
...are you expecting me to concede an argument just because i work for a company that once sponsored some research for a significantly different language that i'm saying doesn't give the best result here? linearization is just a way to pick an arbitrary solution to inheritance problems, and c3 linearization specifically breaks encapsulation by allowing external sources to inject random stuff unexpectedly into super delegation. that's fine when "super" delegation is really peer mix-in delegation, but error-prone and capricious in all the other cases anyway, you're not understanding the problem. the author of X really just wants to use a mix-in to automatically wrap their implementation of a method. c3 linearization doesn't help: they can't use the mix-in as a base because they'll override it, not the other way around. they could make a subclass that adopts it and say "use this instead of X directly", but they could do that with any linearization. the self-type thing lets them say that users of their class have to adopt the mix-in somewhere, but leaves them the flexibility to say how
|
# ? Nov 1, 2015 03:39 |
|
CPColin posted:I got to file a JVM bug with Oracle because this compiles but doesn't run: no instance of B, so yeah, should be blowing up during the compile
|
# ? Nov 1, 2015 04:27 |