feedmegin
Jul 30, 2008

Sapozhnik posted:

maybe it's different inside apple idk, i haven't come across any major open source c projects that abuse preprocessor macros to that extent.

This is from a few weeks back, but I'm guessing you've never done any Gtk/Gnome programming?

feedmegin
Jul 30, 2008

rjmccall posted:

m:n threading is when the language gives you an explicit thread abstraction but it's just not implemented 1-1 using lower-level threads. typically this is because the language believes, probably with good reason, that its threads are much more lightweight than the lower-level threads

To be fair I'm mostly aware of m:n threading in the context of Unix and Posix threads. The theory here is that userspace thread switching is nice and fast (no system calls!) but it can only take advantage of one actual processor core; meanwhile kernel threads are expensive (context switches!) but can run on multiple cores. So the idea in e.g. Solaris and other commercial Unices and the very first attempts at proper Posix threading on Linux back in like the 90s was to have one 'real' thread per CPU more or less and schedule an arbitrary number of threads on the real threads using a user-space scheduler. Best of both worlds.

Turns out doing this efficiently is way harder than people thought. Linux ended up just making kernel threads as lightweight as possible instead, and now even e.g. Solaris has followed suit in its latest versions; one Posix thread is just one kernel-level thread and it's all nice and simple.
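
Seeing the 1:1 model in action is easy enough - a minimal sketch, assuming Linux/glibc (NPTL), where every pthread shows up with its own kernel thread ID:

// Build: g++ -pthread tids.cpp
#include <pthread.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <cstdio>

static void* worker(void*) {
    // Each pthread maps to a distinct kernel task, hence a distinct tid.
    std::printf("worker -> kernel tid %ld\n", (long)syscall(SYS_gettid));
    return nullptr;
}

int main() {
    std::printf("main   -> kernel tid %ld (pid %ld)\n",
                (long)syscall(SYS_gettid), (long)getpid());
    pthread_t t;
    pthread_create(&t, nullptr, worker, nullptr);
    pthread_join(t, nullptr);
    return 0;
}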

feedmegin
Jul 30, 2008

redleader posted:

so the madmen who work on webkit are thinking seriously about removing one of js's few good features

i like how they're planning on introducing a bunch of really low-level threading primitives into js, rather than a set of less footgunny concurrency abstractions.

Ahahaha I look forward to the combination of your average JavaScript programmer and what basically seems to be the barebones Posix threads model. Gonna be glorious.

feedmegin
Jul 30, 2008

rt4 posted:

frankly learning how to use lisp languages has eliminated my patience for learning any other syntax

same, except malbolge

feedmegin
Jul 30, 2008

eschaton posted:

Pepperidge Farms remembers

this was only on systems that didn’t have native threads right, because Solaris had them by the mid-1990s, as did many other Unixes (they just didn’t all have the pthreads API on top yet)

hell the classic Mac OS even had native threads back then, both cooperative (Thread Manager on 68K & PPC) and preemptive (Multiprocessing Services and PPC)

green threads as a concept should’ve been dead by 1995 or so

Green threads are effectively cooperative threads; I'm not sure what using MacOS's version of those instead would have brought to the table.

Early Java on Solaris used green threads, too. If you want lots and lots of threads, they're a lot more lightweight than native threads (especially Solaris's back then), and if you don't have multiple CPUs/cores, native threads aren't actually buying you that much. Also, they can be easier to implement: on the one hand you gotta use async I/O everywhere, on the other your JVM/runtime/native code doesn't have to be thread-safe with respect to native threads.
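
For a flavour of what the userspace half looks like, here's a minimal green-thread switch sketched with ucontext(3), assuming Linux/glibc. Both 'threads' share one kernel thread, and a switch is just a register swap in userspace - no system call:

// Build: g++ green.cpp
#include <ucontext.h>
#include <cstdio>

static ucontext_t main_ctx, green_ctx;

static void green_body() {
    std::puts("green: hello");
    swapcontext(&green_ctx, &main_ctx);  // yield back to the 'scheduler', purely in userspace
    std::puts("green: resumed");
}                                        // on return, uc_link (main_ctx) resumes

int main() {
    static char stack[64 * 1024];        // the green thread's private stack
    getcontext(&green_ctx);
    green_ctx.uc_stack.ss_sp = stack;
    green_ctx.uc_stack.ss_size = sizeof stack;
    green_ctx.uc_link = &main_ctx;
    makecontext(&green_ctx, green_body, 0);

    swapcontext(&main_ctx, &green_ctx);  // run the green thread until it yields
    std::puts("main: scheduling again");
    swapcontext(&main_ctx, &green_ctx);  // resume it to completion
    return 0;
}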

feedmegin
Jul 30, 2008

ulmont posted:

i just finished an article that disagrees with you:


maybe you could ask the author why they mentioned sizes several times?

I mean, a) he does mention both, and b) maybe link the article? Because right now this is just some rando's opinion.

feedmegin
Jul 30, 2008

Xarn posted:

What C gives you is a simple syntax (don't confuse this with simple semantics, or, god-forbid, simple to use language) to use for coding against a virtual machine that doesn't really exist and is, in fact, an attempt to codify the least common denominator of different CPU architectures. Treating it as a language that actually represents the underlying machine is laughable and a good way to get burned.

Different CPU architectures as were common circa 1975, even :sun:

feedmegin
Jul 30, 2008

cinci zoo sniper posted:

fwiw nvidia experience is node.js

Which of course is not the same thing as 'the driver' :shobon:

feedmegin
Jul 30, 2008

Max Facetime posted:

responsiveness, does not randomly fail to download half of its UI,

Doesn't eat hundreds of megs of RAM, doesn't take 10s of seconds just to start up...

feedmegin
Jul 30, 2008

carry on then posted:

are there any languages in production use that have or at least support keywords in various languages?

To be honest, as I understand it, it's pretty much expected worldwide that to be any good as a programmer you've got to know English - at least well enough to read it. Pretty much all the technical references, Stack Overflow, etc etc out there are going to be in English. At that point, having language keywords in English as well (and everyone worldwide being able to instantly understand what they're looking at in anyone else's source) is just kind of the way to go. I could see it being different for something like AppleScript that's supposed to be used by 'the common man', but anything above that? English.

feedmegin
Jul 30, 2008

JawnV6 posted:

there is no way your C# program is leaking memory after program termination

Technically I can think of ways of doing that, e.g. System V shared memory segments :sun:
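
A minimal sketch of what I mean (the key here is made up): the segment below outlives the process, GC or no GC, until someone removes it with IPC_RMID or the machine reboots:

#include <sys/ipc.h>
#include <sys/shm.h>
#include <cstdio>

int main() {
    // Create a 1 MiB System V shared memory segment with a well-known key.
    int id = shmget((key_t)0x5ca1ab1e, 1 << 20, IPC_CREAT | 0600);
    if (id == -1) { std::perror("shmget"); return 1; }
    std::printf("segment %d created; it survives process exit\n", id);
    // No shmctl(id, IPC_RMID, nullptr) here, so the kernel keeps the
    // segment after we exit - check with `ipcs -m`, clean up with `ipcrm`.
    return 0;
}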

feedmegin
Jul 30, 2008

Peeny Cheez posted:

If you must persist on doing this: ceterum censeo iavascripto esse delendam.

ITYM persist IN doing this :smuggo:

feedmegin
Jul 30, 2008

Blotto Skorzany posted:

why would macos lack strace? solaris (where they got dtrace from) didn't get rid of strace

You mean truss; strace on Solaris is some STREAMS bullshit from the early 90s.

All I know about dtrace is that it doesn't seem to work with the output from my compiler (which creates statically linked binaries that don't link in anything, even libc, instead doing raw syscalls) - buuuut my setup is old, so I might not even have 64-bit dtrace, I guess? And obviously this is a hyper-niche case. Similarly, the Linux subsystem for Windows won't run the Linux output.
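
For the curious, the sort of output I mean looks roughly like this - a sketch assuming x86-64 Linux, no libc, raw syscalls only:

// Build: g++ -static -nostdlib -nostartfiles -o hello hello.cpp
static long raw_syscall3(long nr, long a, long b, long c) {
    long ret;
    // x86-64 Linux syscall convention: nr in rax, args in rdi/rsi/rdx;
    // the syscall instruction clobbers rcx and r11.
    asm volatile("syscall"
                 : "=a"(ret)
                 : "a"(nr), "D"(a), "S"(b), "d"(c)
                 : "rcx", "r11", "memory");
    return ret;
}

extern "C" void _start() {
    const char msg[] = "hello from raw syscalls\n";
    raw_syscall3(1, 1, (long)msg, sizeof msg - 1);  // write(2)
    raw_syscall3(60, 0, 0, 0);                      // exit(2)
}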

feedmegin
Jul 30, 2008

Plorkyeran posted:

dtrace definitely can log raw syscalls on macos

In general, yes, but it seems to want to hook or attach to the process in some way first, and instead I get an obscure error message I don't recall off the top of my head.

feedmegin
Jul 30, 2008

ratbert90 posted:

My friend who has coded with C++ for over a decade just learned that structs can have private/public members. :allears:

For the sake of his sanity don't tell him there's also 'protected' :sun:
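
For anyone else who didn't know: the only difference between struct and class in C++ is the default access. A quick sketch:

struct Base {
public:
    int visible = 0;
protected:
    int for_derived = 1;   // yes, 'protected' works in a struct too
private:
    int hidden = 2;        // inaccessible outside Base
};

struct Derived : Base {    // struct inheritance even defaults to public
    int sum() { return visible + for_derived; }  // fine; 'hidden' would not compile
};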

feedmegin
Jul 30, 2008

mystes posted:

As far as I can tell, that link only shows that blockchain now has a ton of webass hot garbage in it.

One of his slides has 'Solution: gas' on it.

That sounds like the correct response to blockchain people to me :hitler:

feedmegin
Jul 30, 2008

carry on then posted:

lol i aint' goin back to a thinkpad bub

I literally use a refurbished Thinkpad as my personal laptop, cheap and built like a tank. Running Kubuntu of course.

feedmegin
Jul 30, 2008

prisoner of waffles posted:

We're going to keep on improving our build discussions until each one is byte-for-byte reproducible

We're going to have a new build discussion every night and run the full test suite on it

We should write our own goon build system

feedmegin
Jul 30, 2008

prisoner of waffles posted:

ugh, don't know about you but I can't stand GBS

nice

feedmegin
Jul 30, 2008

bob dobbs is dead posted:

"we ship once every 6 months"
a real thing i have heard while giving a real interview

i guess that's fine if you make like widgets or shrinkwrapped software

Lots of people still make these things. Microcontroller firmware isn't really on an ~agile release cadence~

feedmegin
Jul 30, 2008

JawnV6 posted:

some place like the daily wtf had an interview horror story where the candidate was insisting that finally() blocks might not run and the interviewers were confident that one would always run

"what if I go up to the machine and unplug it? how does your finally() block run then?"
interviewers blanched, ended the interview

Someone's never heard of a UPS :smuggo:

feedmegin
Jul 30, 2008

Soricidus posted:

yeah, we don’t bundle a jre and literally half our support calls are resolved by getting people to replace their random old jre with the latest oracle java 8. it’s bad. it was just about ok for the long java 5/6 years but no way it’s viable today

luckily we’re finally able to update to modern java so bundling openjdk 11 is just round the corner!

But 'write once, run anywhere' :ohdear:

I mean if you're shipping your own JRE for every platform you support you might as well write your GUI app in native code with Qt or something and be smaller, faster and less of a memory hog. What's a VM even buying you, really?

feedmegin
Jul 30, 2008

carry on then posted:

god there are some amazing takes today.

yeah, why the hell would anyone just slot in a completely 1:1 compatible open source replacement runtime when they could plunge millions of dollars and years of development effort into rewriting the whole thing in loving c++

I mean in a wider conceptual sense I guess. Yeah, if you already have your Java GUI app then I agree, it's just funny to me because Java's whole initial selling point back in the 90s was specifically not having to do this thing.

feedmegin
Jul 30, 2008

pseudorandom name posted:

I'm specifically asking about running destructors and freeing memory off the main thread in a non-GC scenario.

I could see issues with running non-trivial destructors on some random rear end thread other than the one the object was created on, tbh
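
A contrived sketch of the hazard (the class here is invented): anything whose destructor assumes thread affinity - UI handles, thread-locals, COM apartments - goes bang under a free-on-another-thread scheme:

#include <cassert>
#include <thread>

class ThreadBound {
    std::thread::id home_ = std::this_thread::get_id();
public:
    ~ThreadBound() {
        // This destructor assumes it runs on the creating thread.
        assert(std::this_thread::get_id() == home_);
    }
};

int main() {
    auto* obj = new ThreadBound;
    std::thread([obj] { delete obj; }).join();  // asserts: wrong thread
    return 0;
}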

feedmegin
Jul 30, 2008

Lutha Mahtin posted:

does this compile to the CLR bytecode (or whatever it's called) that "regular" languages like c# compile to? or can you do wacky mix and match stuff with both CLR and traditional machine code

The latter.

feedmegin
Jul 30, 2008

Cybernetic Vermin posted:

still, i can't name a single thing i *know* was written in it.

Multics :sun:

feedmegin
Jul 30, 2008

Sweeper posted:

making stacks can be expensive if you use the for control flow, omitting the stack is fine then. lots of people do try {} catch (npe) {} I assume

Isn't using exceptions for regular control flow (as opposed to, you know, exceptional events) generally considered a p bad idea?

feedmegin
Jul 30, 2008

AggressivelyStupid posted:

I like Qt but I don't want to use their editor

You don't have to? I do my Qt dev in emacs.

feedmegin
Jul 30, 2008

echinopsis posted:

part of using django to do what im doing is diving into the command prompt on this mac but tell me is the built in terminal a piece of poo poo and do most people use a new one or wat

Could be worse, early OS X defaulted to tcsh.

feedmegin
Jul 30, 2008

echinopsis posted:

if youre writing a compiler itself do you have a lookup table or some kind of reference for actually making machine readable executables?

That's usually more a job for the linker, but you can look up the specs for executable files for your OS easily enough - ELF for most Unixes (except AIX lol; MacOS uses Mach-O), PE-COFF for Windows. They're just another container file format. My pet nerd project and GitHub CV bait is a compiler that can target all of those (except AIX because, again, lol)
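
They really are just container formats - here's a tiny sketch that sniffs the ELF identification bytes by hand, no elf.h required:

#include <cstdio>
#include <cstring>

int main(int argc, char** argv) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <binary>\n", argv[0]); return 2; }
    unsigned char ident[16] = {};   // e_ident: magic, class, endianness, ...
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }
    size_t n = std::fread(ident, 1, sizeof ident, f);
    std::fclose(f);
    if (n != sizeof ident) return 1;
    if (std::memcmp(ident, "\x7f" "ELF", 4) != 0) { std::puts("not ELF"); return 1; }
    std::printf("ELF, %s-bit, %s-endian\n",
                ident[4] == 2 ? "64" : "32",        // EI_CLASS
                ident[5] == 2 ? "big" : "little");  // EI_DATA
    return 0;
}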

feedmegin
Jul 30, 2008

eschaton posted:

it’s not like the company’s own software ran or built on Solaris

back in the mid-1990s some PowerPC stuff was built using xlc on a fleet of AIX RS/6000 running AIX, but that was all handled semi-transparently (it looked just like building locally) and eventually someone wrote a shim to run XCOFF binaries like xlc directly under MPW

Not https://en.m.wikipedia.org/wiki/Apple_Network_Server (or before that A/UX) ?

feedmegin
Jul 30, 2008

BobHoward posted:

this is so true that, with modern transistor densities, when there’s a call for a state machine that’s complex but also needs hard real-time guarantees, chip designers often throw in a whole embedded CPU core and have a software engineer implement the state machine instead. every modern gpu or cpu or cellphone chip is sprinkled with dozens of cortex-M0 class microcontrollers. these are often not even touted in marketing materials since they typically cannot run user supplied code. they’re just testament to the difficulty of using hardware design techniques to solve complicated problems.

Truth, and also literally my previous job. It makes things much more flexible than fixed hardware too, since your M0 can potentially talk to and be configured by the host driver. An M0 is a tiny fraction of a modern die, something like a tenth of a square millimetre.
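
The pattern, very roughly (register addresses and bits invented for illustration): the 'hardware' state machine becomes a dumb polling loop in M0 firmware, which the host driver can reconfigure at runtime:

#include <cstdint>

enum class PwrState : uint8_t { Off, Ramping, On, Fault };

// Memory-mapped registers; these addresses are made up for the sketch.
volatile uint32_t* const STATUS_REG = reinterpret_cast<uint32_t*>(0x40000000);
volatile uint32_t* const CTRL_REG   = reinterpret_cast<uint32_t*>(0x40000004);

int main() {
    PwrState state = PwrState::Off;
    for (;;) {                      // bare-metal main loop, no RTOS needed
        uint32_t status = *STATUS_REG;
        switch (state) {
        case PwrState::Off:
            if (status & 0x1) { *CTRL_REG = 0x1; state = PwrState::Ramping; }
            break;
        case PwrState::Ramping:
            if (status & 0x8)      state = PwrState::Fault;
            else if (status & 0x2) state = PwrState::On;   // rail stable
            break;
        case PwrState::On:
            if (status & 0x8) state = PwrState::Fault;
            break;
        case PwrState::Fault:
            *CTRL_REG = 0;          // shut the rail down and stay here
            break;
        }
    }
}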

feedmegin
Jul 30, 2008

Soricidus posted:

the english, please, not the british. plenty of the whipping happened to british people who unreasonably insisted on trying to speak cornish, welsh, gaelic, etc.

Plenty of those Scots, Welsh etc were also fully on board with the empire and down with eg whipping brown people in the New World, too, though.

feedmegin
Jul 30, 2008

Notorious b.s.d. posted:

writing your own process creation / thread management api just means somebody else is going to come along and smear a horrible posix-compatible fork and pthreads implementation overtop it anyway

might as well embrace the badness

You mean 'nearly compatible, but with horrific edge cases, and also the performance will absolutely suck all the balls'. Hi, Cygwin.

feedmegin
Jul 30, 2008

Sapozhnik posted:

making it a pain in the rear end for an ihv to maintain a linux driver that is not part of the upstream project is not a bug, it's a feature. only nvidia has managed to swim against that tide and it's still a giant pain in the rear end for everybody to deal with.

Plenty more than them, unfortunately; see also basically every embedded GPU manufacturer

feedmegin
Jul 30, 2008

Hmm. While GDI was indeed intended to be usable such that you could open a graphics context on a printer and draw on it as you would a screen, adjusting for DPI etc - that isn't the same as 'using the PostScript imaging model', that's just device independence. Indeed you will struggle to see a Bezier curve anywhere in classic GDI, X11 or QuickDraw; it's all your usual straight lines, rectangles, pixmaps and bitblts. Outside of fonts, I guess, but that's hidden from the programmer, and early fonts were just fixed-size bitmaps in any case.

That's why Display PostScript was such a fancy high end big deal in the 90s, because now you could do all this stuff in your GUI (if you happened to be loaded and own a Sun workstation running NeWS or a NeXT box)


feedmegin
Jul 30, 2008

eschaton posted:

like I just got a couple of VAX/VMS Consolidated Distribution packages and one of the layered products is a PostScript renderer that outputs sixel graphics for the DEC VT240 graphic terminal

That's 'fancy high end' though, in a context where we are discussing GDI on Windows.

feedmegin
Jul 30, 2008

Shaggar posted:

being too poor to afford a windows license is understandable, but many people use Linux because they're cheap and then wonder why things suck. good tools cost money.

Hi Steve Ballmer, how's life back in 1998? You might want to buy this cool new startup called Google, just sayin'!

feedmegin
Jul 30, 2008


'Taken together, the .NET Core and Mono runtimes have a lot of similarities (they are both .NET runtimes after all) but also valuable unique capabilities. It makes sense to make it possible to pick the runtime experience you want. We're in the process of making CoreCLR and Mono drop-in replacements for one another. We will make it as simple as a build switch to choose between the different runtime options.'

Uhhh why not just pick one and integrate whatever from the other?!

feedmegin
Jul 30, 2008

BobHoward posted:

:allears: I need links


layers of irony for those unfamiliar: vhdl was created by the United States DOD as its official HDL, complete with syntax directly lifted from Ada, because DOD. then us industry (outside of defense contractors) basically ignored it and went with verilog instead, while for reasons unclear to me european private industry went the opposite way

Fwiw Europe's moving over to (System?) Verilog these days. ARM definitely has.
