ExcessBLarg!
Sep 1, 2001

Sagacity posted:

Looks like Go is now the next language for the Rails/NodeJS hipsters to flock to. I wonder when they'll arrive at a language that has a strong type system.
So the OP is actually a great post. His comments on Node's terrible error handling are exactly what's wrong with Node when used for non-trivial projects, the kind of stuff that most noders just ignore.

ExcessBLarg!
Sep 1, 2001
Everyone's on about the free food at Google, but honestly it sounds like about as good a deal as a college meal plan, perhaps with better food. If you make more money, you can buy breakfast at whichever trendy waffle shop you prefer.

It sounds like the perk Google is really offering is convenience, along with subtly encouraging everyone to work longer hours. Again, sounds kind of like undergrad. That might be great if you're an early 20-something and want to extend that lifestyle. But if you're a little older, have a family, or just want to socialize outside work, those conveniences are less meaningful.

Most other institutions (certainly the ones worth working for) also feature competent coworkers and internal departments there to help you; those perks are hardly unique.

ExcessBLarg!
Sep 1, 2001

Subjunctive posted:

A relatively-major service at work uses Mongo, to the point of upstreaming major changes.
Doesn't Mongo have that fun AGPL license where you kinda have to do that anyways?

ExcessBLarg!
Sep 1, 2001

Plorkyeran posted:

If you're using one of those dumb JS MVC frameworks that has to be able to talk directly to a mongo server then you'd have to distribute the source,
Oh that explains it. The AGPL kind of makes sense for web server code since the user interacts "directly" with the web server. I didn't understand the utility of it for a database since I'd never intentionally expose my database directly to clients/public. But I guess that's a thing that people do, huh.

ExcessBLarg!
Sep 1, 2001
Fortunately there's a self-selection thing going on here. The folks experienced and competent enough to implement an operating system will recognize that it makes no sense to reinvent the wheel and will just contribute to Chrome OS, Firefox OS, Open webOS or whatever. The only folks who think this effort is a good idea will derail themselves long before they can make any actual progress.

As a bonus, it keeps the trendy incompetents from noising up mailing lists of decent projects.

ExcessBLarg!
Sep 1, 2001

PrBacterio posted:

You know, I'm trying over here but I frankly can't think of a reason why, in principle, an OS built on Javascript would have to be a bad idea on the face of it.
In some ways it's not, but the approach is wrong. Both Chrome and Firefox have effectively evolved into their own application platforms, especially the former with the Chrome Web Store. If you strip away everything but the browser, you have Chrome OS and Firefox OS. Both have a solid technical basis for moving in that direction too; the browser provides sandboxing and a security model that's a significant improvement over legacy desktop platforms. Much like iOS and Android apps, you can install Chrome apps that run offline/locally and all that good stuff, without fear that it will delete your files and fuck up your computer.

But neither Chrome OS nor Firefox OS tries to completely replace userland components with JavaScript equivalents, especially the ones that users don't interact with directly. Instead, they build on a solid base of vetted open-source code. That's the part that Node OS is trying to replace, and there's no real technical advantage in doing it. It's a complete waste of time.

ExcessBLarg!
Sep 1, 2001

PrBacterio posted:

The discussion was about an OS with a userland written in Javascript, in the same way like Android's is written in Java.
One difference is that the components that Node OS is currently trying to replace are equivalent to the C/C++ Android code borrowed from BSD and other sources. Literally everything in Android that's actually written in Java and runs on Dalvik/ART is marked TODO in Node OS, with only the foggiest idea of how to actually get there.

ExcessBLarg!
Sep 1, 2001

Ender.uNF posted:

As far as I know it lies in the MS Research death dumpster for the same reason most attempts at creating a new OS fail: everyone already has an OS that mostly works so there's no push to adopt anything better,
There's also the problem where the world runs on a shitload of legacy software. Can you run a legacy C library in this environment? If so, does it still protect you against all the traditional C-language vulnerabilities for which we've spent decades hardening traditional kernels? If either answer is no, yeah, you're not going to get wide adoption.

ExcessBLarg!
Sep 1, 2001

Ender.uNF posted:

True enough; I thought it would be interesting to support running legacy code or VMs in this kind of environment, but make them live in ring 3 with traditional memory protection and have a ring 0 VM manager/proxy handle messages on their behalf.
That's an awful lot like a microkernel running with a monolithic BSD service daemon. The next step is to cut out the microkernel.

Yeah, it's interesting stuff to think about, it's just too hard to move towards.

I'm actually kind of amused we're doing the opposite now: writing applications against VMs hosted inside strict sandboxes (web browsers) that use a lot of message passing (in the form of REST calls). All of which is built on top of a base of probably-insecure-but-getting-better system and UI libraries that date back 30 years. And they said microkernels were slow.

ExcessBLarg!
Sep 1, 2001

NtotheTC posted:

I thought version tuples (or the bash equivalent) were a thing everywhere. Or is this just my spoiled python background?
Comparing version numbers is a pain in the rear end. Generally the "right" way is to split the version string into an integer tuple and perform integer comparison on each element until an unequal value is found. That's basically what GNU libc strverscmp does.
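
Here's a minimal sketch of that tuple-style comparison in C (my own toy vercmp, not strverscmp itself, and purely numeric components only):

code:
#include <stdio.h>
#include <stdlib.h>

/* Compare dotted numeric versions ("1.10.2" vs "1.9") by parsing each
 * component as an integer and comparing until the components differ.
 * Returns <0, 0, >0 like strcmp.  Letters and suffixes are not handled. */
static int vercmp(const char *a, const char *b)
{
    while (*a || *b) {
        char *ea, *eb;
        long x = strtol(a, &ea, 10);
        long y = strtol(b, &eb, 10);
        if (x != y)
            return (x > y) - (x < y);
        if (ea == a && eb == b)
            break;                      /* non-numeric junk on both sides; give up */
        a = (*ea == '.') ? ea + 1 : ea;
        b = (*eb == '.') ? eb + 1 : eb;
    }
    return 0;
}

int main(void)
{
    printf("%d\n", vercmp("1.10.2", "1.9"));   /* positive: 1.10.2 > 1.9 */
    printf("%d\n", vercmp("1.2", "1.2.0"));    /* zero: treated as equal */
    return 0;
}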

The problem is that version strings can also contain letters or other symbols whose meaning isn't obvious. Debian package versions are a good example of an utterly perverse, but at least generally consistent, versioning scheme that can encode an upstream "decimal" version number internally while also providing for package revisions and version epochs. I find most version strings conform to a subset of what Debian uses, and in the rare instances where I have to compare them in shell, "dpkg --compare-versions" is pretty nifty.

ExcessBLarg!
Sep 1, 2001

Corla Plankun posted:

Ok, I give up. Why are the first two lines bad?
At best it doubles the amount of text that has to be read because the code is essentially written twice.

But that style of commenting quickly becomes a headache when a group of people have to maintain that code. When the code changes, the comments have to be updated to reflect the changes, otherwise they become inaccurate. Or someone has to make the decision to delete the superfluous comments, which is extra work and might offend someone.

It's also kind of defeating. The purpose of comments is to call attention to portions of code that are counterintuitive or whose purpose is not obvious. Basically a comment says "pay attention here" to the reader. If everything's commented, things that actually need to be called out are lost in the noise. Also, generally speaking, well written code should be fairly clear and unambiguous in meaning most of the time, so excessive commenting is a sign that someone is not comfortable with their ability to write clear code.

Of course, it's a fine thing to do on a toy project if you're just starting to learn a language/platform and it aids in the learning process.

ExcessBLarg!
Sep 1, 2001

Corla Plankun posted:

I actually literally and unironically found Ruby to behave exactly like I expected it to when I was learning it. It's a super fun language to write in for that reason.
Yep. I've also long disliked Python for the issues raised in the thread, and a decade ago saw Ruby as an alternative that really made a lot of sense. Couldn't understand why Python was getting all the joy and Ruby wasn't (no English documentation? Pshaw, there's a book!). Then Rails happened and any credibility Ruby had went out the window.

Honestly Python isn't terrible. It has some idiosyncrasies due to its age. Most people live with them, or decide other benefits of the language (and the technically strong, not-complete-hipster community) outweigh the issues it does have.

The horror, though, is Perl 5. And 6 too.

ExcessBLarg!
Sep 1, 2001

BigRedDot posted:

So is the issue just that you, in particular, work with some lovely team that insists on writing descriptors and metaclasses just for the hell of it, and insists on monkey patching everything in sight? Because not everyone, or even most of everyone, or even a small fraction of everyone, does that. It's pretty specious to claim that it is not possible to reason about all python code because of the most dynamic features that almost no python code uses.
He's not wrong though. These features can be convenient at times when writing short scripts and one-offs. But in the context of a large project it's necessary to be much more disciplined about interfaces, since the opportunity to inadvertently wreck code elsewhere is a real concern.

It's not just Python though, Perl and Ruby are no better in that regard, and probably considerably worse.

ExcessBLarg!
Sep 1, 2001

pseudorandom name posted:

The %G and %g formats to strftime() use the ISO 8601 week numbering year instead of the actual year as used by %Y and %y.
GNU libc's strftime(3) page has a reasonable explanation of "%G" that would caution anyone against using it in favor of "%Y", but coreutils' date(1) has a much better warning:

date(1) posted:

%G year of ISO week number (see %V); normally useful only with %V
Don't use "%V"? Don't use "%G"! Simple as that.

ExcessBLarg!
Sep 1, 2001
Linux? How does the MAINTAINER even accept that poo poo?

Sure, that kind of code is common in SoC vendor trees because holy poo poo they can't write good code. But that's not supposed to make it upstream at all.

ExcessBLarg!
Sep 1, 2001

pseudorandom name posted:

Assuming within that script that the directory containing the currently running script exists was reasonable right up until ntfs-3g crashed and that "cd ${0%/*}" caused an IO error.
1. That IO error should've been caught with a "set -e".

2. 'rm -rf "$foo"/*' is usually wrong. It won't remove any dot-files in $foo. It also might not remove all non-dot-files in $foo if the number of files exceeds the limit on wildcard expansion. It's safer to do 'rm -rf "$foo"; mkdir "$foo"', unless you intentionally want to leave dot-files behind and the number of files in the directory is otherwise known to be reasonable.

3. Obtaining the absolute path of the current directory, or the directory a script is in, is a difficult thing to do portably. 'readlink -f "$(dirname "$0")"' works well on systems with GNU readlink, but in particular Macs don't support that I think. The Steam script contains a five line comment to explain the non-obvious approach it uses to obtain the absolute path, so sanity checking the result is absolutely prudent.

It's crazy to blame this problem on NTFS. The subshell to obtain the absolute path could poo poo out for any number of reasons (stalled network mount, lack of permissions if run as a different user, process rlimit, etc.), and the script could've, but didn't, employ multiple techniques to keep such an event from becoming a bigger problem.

ExcessBLarg! fucked around with this message at 22:57 on Jan 18, 2015

ExcessBLarg!
Sep 1, 2001

qntm posted:

pwd isn't universal?
It is, but if the path to the current working directory contains a symlink, then "pwd" and "readlink -f `pwd`" will return different results. Either result may be sufficient depending on the purpose, and in the Steam case they are, but there are applications that do demand "must resolve to absolute path with no symlinks."

ExcessBLarg!
Sep 1, 2001

GrumpyDoctor posted:

It gives idiots a wrong answer to regurgitate on StackOverflow when someone asks how to kludge multiple dispatch in languages without multiple dispatch.
So what's the right answer? Besides "don't".

ExcessBLarg!
Sep 1, 2001

pigdog posted:

Nope, frameworks these days often act as web servers themselves.
That's a bigger horror to me. Writing secure Internet services is hard, and it took many years for the big guys (Apache, nginx, etc.) to get it right. I can't imagine that WEBrick, Mongrel, Phusion Passenger, Unicorn, Thin, and Puma have all had sufficient longevity to survive as public-facing HTTP daemons on moderately popular sites.

Everyone at least still proxies through nginx or Varnish first, right?

ExcessBLarg!
Sep 1, 2001

Hammerite posted:

I don't mean to pick on you or anything, but I found it funny that this incredibly vague post was in response to someone asking for a "specific example".
It's pretty accurate for being uselessly vague. He's referring to Alan Cox (former tty and Linux 2.2 maintainer), and the story is on his Wikipedia page. And yes, he's working on FUZIX right now, which is a Unix for 8-bit micros.

Although the Z80 isn't a particularly weird architecture, it's just old. The 6502 port is a lot more challenging.

ExcessBLarg!
Sep 1, 2001
How does someone find out pg_cmdtuples is deprecated, though? Here are the ways:

1. Chance read of documentation and active review of stable code.

2. The language runtime indicates to the user (e.g., writes to a log) that the function is deprecated.

3. Code no longer works with the next major version of the language.

Now, I assume that pg_cmdtuples was deprecated in PHP 5, which would make it appropriate to remove it entirely in PHP 6. Except PHP 6 never happened and the world is effectively stuck with PHP 5. Of course, PHP is a horror itself, but that means #3 is unlikely to happen, and that's OK. I don't know if PHP does #2, but it's possible the code has always run at a log level above the one at which PHP emits deprecation warnings. #1 appears to be how this was all discovered.

ExcessBLarg!
Sep 1, 2001

LeftistMuslimObama posted:

Yeah, the whole time they were explaining fork and how elegant it was in the lecture,
Are you actually taking Remzi's/Andrea's class, or at least attending Wisconsin-Madison? Professors like to pretend their poo poo doesn't stink, but if this is actually being used at another respectable institution that's insane.

ExcessBLarg!
Sep 1, 2001

LeftistMuslimObama posted:

I am at UW, but they are currently exploiting the hell out of grad student lecturers in the CS department.
Sounds like an inbreeding problem.

ExcessBLarg!
Sep 1, 2001

Suspicious Dish posted:

Any OS engineer could be able to tell you why fork() and exec() are bonkers insane.
I disagree. Conceptually fork is a very convenient and relatively straightforward (at a high level) approach to spawning processes that affords userspace a lot of flexibility in terms of setting up child processes and establishing communications back with the parent. Obviously the details are a bit hairy, but no less so than alternative mechanisms that are sufficiently flexible. Now, it's true that conformant fork(2) semantics are too inflexible for modern kernels, which is why Linux's clone(2) came about. Conceptually, though, clone is still based on the idea of, and implements, process forking.

Suspicious Dish posted:

Well, first of all, COW means that the kernel needs to overcommit memory on fork()
It does not; you can run the kernel in "don't overcommit" mode, in which case fork will return ENOMEM if there's insufficient memory for all the read-write pages. Conversely, processes can create memory overcommitments through brk and mmap without having to fork either. The reality is that Linux allows memory overcommitment by default because it's useful when memory use is inefficient, which can happen for a number of reasons other than COW'd read-write pages from fork (e.g., wasted space in thread stacks).

ExcessBLarg! fucked around with this message at 05:17 on Feb 6, 2015

ExcessBLarg!
Sep 1, 2001
Edit: Dupe post, whoops!

ExcessBLarg!
Sep 1, 2001

Suspicious Dish posted:

I would prefer a system where fork(); took a function pointer, the child process inherited all the VM maps from the parent process but were set readonly,
So you'd end up with read-only data segments. That might be tolerable if you can make sure that all subsequent variable modifications are register/stack only. The first thing that comes to mind that it breaks is libc's errno. Either way, there's an assumption that the goal of the child process is to exec, which isn't always the case.

Suspicious Dish posted:

and POSIX specified the mind-blowingly-useless vfork();.
vfork is its own horror, but that's not really what happened.

Before BSD implemented COW fork, vfork was implemented as an optimization for the common fork-then-exec case to avoid the expense of copying rw pages. Now, if you were chummy with your compiler, you didn't have to immediately exec, but instead could call functions in the child process to set fds, etc., so long as the stack was not modified or unwound above the vfork call--hence, it was still strictly more useful than a spawn (fork-then-immediately-exec). Of course, use of vfork is pretty darn dangerous and not worth the benefit after COW fork came around. It's not clear to me if vfork persisted in its vestigial form in POSIX because vfork-exec was still common in code or because they wanted POSIX to support MMU-less systems. Possibly both.
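
For reference, a minimal sketch of the classic vfork-then-exec pattern (my own example; these days the only safe things to do in the child are exec or _exit):

code:
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = vfork();
    if (pid < 0) {
        perror("vfork");
        return 1;
    }
    if (pid == 0) {
        /* Child: the parent is suspended and shares our address space,
         * so touch as little as possible: exec or _exit, nothing else. */
        execlp("echo", "echo", "hello from the child", (char *)NULL);
        _exit(127);          /* only reached if exec failed */
    }
    waitpid(pid, NULL, 0);   /* parent resumes once the child execs or exits */
    return 0;
}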

ExcessBLarg! fucked around with this message at 05:59 on Feb 6, 2015

ExcessBLarg!
Sep 1, 2001

LeftistMuslimObama posted:

Yeah, I just basically didn't see why fork() is useful by itself,
In my book, fork has three main uses:

1. As a means of task parallelism, particularly I/O parallelism where a process spends lots of time in blocking syscalls. Its use here is mostly archaic with newer/better mechanisms like threads and asynchronous I/O, but was still a common approach through the 90s.

2. As a means of providing process supervision and/or sandboxing. Daemons sometimes run supervisor processes to catch crashes of child processes, or to facilitate privileged operations of sandboxed children. This use is starting to go (partially) away as more powerful init systems are becoming common (launchd, upstart, systemd, etc.). However it's still useful in sandboxing where it's faster to fork off a seed process than it is to create runtime environments from scratch (e.g., Android's zygote).

3. Combined with exec, to provide a simple two-step kernel abstraction that facilitates a crapload of ways to "spawn" other programs.

LeftistMuslimObama posted:

but why fork()+exec() instead of just something like spawnProcess(<function pointer>,<parameters>)?
There was once a school of thought that the kernel should provide simple abstractions to enable userspace to do everything that userspace needs to do. There are two reasons for this. First, kernel code is privileged (by definition), while userspace code is generally protected; so it's better to make the kernel code simpler where possible to mitigate the risk of introducing bugs. Granted, we're still largely using monolithic kernels, so this idea isn't taken too seriously. Second, the kernel-userspace (syscall) interface is expected to be very stable and long-lived, so as to remain compatible with decades-old programs. Thus, the simpler the syscall interface is, the more likely it won't have to change to accommodate some new need.

Now, there are a lot of details involved in spawning a new program. Some of these, like where you locate the code, stack, and parameters, are defined as part of an ABI that the kernel and userspace agree on. However, others, like what files should be open, where fds should point, what user and process group the program should run under, whether any resource limits should be set, what signals should be ignored, etc., are conventions of userspace and even change depending on what's spawning what. fork+exec allows all these things to be configured as part of spawning a child program without having to create a massive CreateProcess interface to specify them. Another alternative is to spawn children in a paused state, to allow the parent process to configure the children before launching them. My guess is that that approach may have been considered, but the combination of fork+exec could handle that case without having to introduce a set of "change this behavior about this other process" syscalls.
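
To make that concrete, here's a rough sketch (my own example) of the kind of per-child setup that fork+exec lets you express with ordinary syscalls in the child, rather than through a giant spawn interface:

code:
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: configure the environment the new program will inherit,
         * using the same plain syscalls any process can use. */
        int fd = open("child.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd >= 0) {
            dup2(fd, STDOUT_FILENO);   /* redirect stdout to a file */
            close(fd);
        }
        signal(SIGINT, SIG_IGN);       /* ignored signals survive exec */
        chdir("/tmp");                 /* run the program somewhere else */
        execlp("ls", "ls", "-l", (char *)NULL);
        _exit(127);                    /* exec failed */
    }
    waitpid(pid, NULL, 0);             /* parent waits for the child */
    return 0;
}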

ExcessBLarg! fucked around with this message at 04:55 on Feb 8, 2015

ExcessBLarg!
Sep 1, 2001

LeftistMuslimObama posted:

I don't feel like I'm getting the actual nuts-and-bolts of the machine, just a snapshot of the machine itself. The lecture about fork never really branched (hah) out into any discussion of why it was designed that way, what sorts of considerations we might make when designing our own OS, or anything like that.
Generally a graduate-level OS course would go more into the "why" on various topics and involve reading of the papers that originally described these things. In some ways undergraduate CS is all about bootstrapping--getting broad enough knowledge of the discipline to be conversant with just enough depth in various topics so that you can be successful in graduate school or your first job in the field. Unfortunately the quality of CS programs is all over the map and even at respected institutions, the quality of a specific class depends greatly on who is teaching it.

There's also a bias towards Unix in academia. It's a combination of its age, prevalence, academic roots (BSD was a graduate research project), availability of source code, and vendor neutrality. But it's true, it's not the only system out there, and comparing Unix to how VMS did, or NT does, things can be enlightening.

ExcessBLarg!
Sep 1, 2001

Ender.uNF posted:

You can't malloc in between fork and exec.
You can in a single-threaded program.

So, fork gets a lot more challenging in multithreaded programs. fork+exec still works fine, although some care is needed in the event exec fails. In multithreaded programs the restrictions on child-process code are the same as those on signal handlers: async-signal-safe calls only. They're inconvenient, but it's still workable. My recommendation, since you already have threading as a means to achieve parallelism, is to avoid using fork after spawning secondary threads, except for fork+exec.

Part of the problem is that threading support is a total bolt-on in Unix systems. Linux didn't have decent POSIX threads support until Linux 2.6, and (as previously noted) things like the close-on-exec race persisted for some time after. Bashing in threading support also disrupted far more APIs and conventions than just fork.
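
A rough sketch of the "some care is needed" part, using the common trick of a close-on-exec pipe so the parent can tell whether exec actually succeeded (my own example; pipe2 is a Linux/BSD extension), with the child sticking to async-signal-safe calls:

code:
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int pfd[2];
    if (pipe2(pfd, O_CLOEXEC) < 0) {   /* the pipe vanishes on a successful exec */
        perror("pipe2");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: only async-signal-safe calls from here on. */
        close(pfd[0]);
        execlp("ls", "ls", (char *)NULL);
        int err = errno;               /* exec failed; report why */
        write(pfd[1], &err, sizeof err);
        _exit(127);
    }

    close(pfd[1]);
    int err = 0;
    if (read(pfd[0], &err, sizeof err) == (ssize_t)sizeof err)
        fprintf(stderr, "exec failed: %s\n", strerror(err));
    close(pfd[0]);
    waitpid(pid, NULL, 0);
    return 0;
}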

ExcessBLarg!
Sep 1, 2001

nielsm posted:

Isn't the conclusion just the obvious, either you use fork() for multiprocessing, or you use pthreads, but never both.
Yes, that's the obvious conclusion, except for the case of fork+exec if you have to run an external program from one of the threads. It's not that hard to do, and boilerplate code has been posted a bunch of times throughout this discussion. Still, some folks think we should be using posix_spawn or something else instead, which is a valid opinion.

pseudorandom name posted:

And your libraries better tell you whether or not they're multithreaded and the new versions better not suddenly become threaded.
So unless the library is a thread library, it shouldn't ever call pthread_create(3) or clone(2) behind my back. Any API call that does needs to prominently state in its documentation that that's the behavior. Ideally it should also be obvious from the function name alone that it's likely to spawn a thread.

You just don't, do that.

ExcessBLarg!
Sep 1, 2001

Jabor posted:

In sane languages, libraries are capable of using threads completely safely, as an implementation detail that users of the library don't actually have to worry about.
Yes, languages that have green threads, GILs, hide fork, and/or don't otherwise support signal handling or place limitations on signal delivery. Conversely, thread use by libraries in higher level languages may not always be safe either, even if there's a general assumption of safety by programmers.

Problems that come with threading and signals, like genuine concurrency and reentrancy, are systems problems and not an artifact of the language alone. C, which has very little of a runtime, doesn't mask these at all and so "foists" the problems on the programmer. Managed languages tend to hide them to the benefit of the programmer, but that can have consequences when it comes to writing low-level systems code.

ExcessBLarg! fucked around with this message at 18:25 on Feb 9, 2015

ExcessBLarg!
Sep 1, 2001

Suspicious Dish posted:

Python exposes fork, heavily uses it (multiprocessing), also exposes threads, and can use threads behind your back, and also exposes libraries incompatible with fork (ctypes).
CPython also has a Global Interpreter Lock (GIL) which prevents multiple threads from executing Python bytecode concurrently.

ExcessBLarg!
Sep 1, 2001
I completely understand that IBM has customers that may well routinely deal with EBCDIC data, but do they seriously need EBCDIC source support? Is this poo poo not cross-compiled?

ExcessBLarg!
Sep 1, 2001
How do people actually interact with the mainframe? Presumably line printers are gone. Do they use terminal emulators on desktops? Are they reasonably VT-compatible (not sure how EBCDIC actually plays into that)?

So, honestly, I would've imagined that once C language software became popular they would've converted old code using trigraphs to a newer EBCDIC variant that supports C language syntax natively. Surely they must do something like that in order to pull in C code written on different platforms, unless it's a truly closed ecosystem. I get backwards compatibility, but continuing to march forward with trigraphs, which were a stopgap measure to begin with, seems particularly crazy.

ExcessBLarg!
Sep 1, 2001
code:
BITS = BITS && BITS;

ExcessBLarg!
Sep 1, 2001
C was the first language I learned after programming in Applesoft BASIC (with a bit of 6502 machine language). I had my "holy poo poo" moment after learning about function pointers.

These days dynamic dispatch is taught early, often in the context of C++/Java-style polymorphism or Python/Ruby-style duck typing. But at the time, such a subtly powerful mechanism built manually from primitive function pointers was absolutely incredible, and it changed the way I thought about programming entirely.
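
For anyone who hasn't had that moment yet, here's a tiny sketch of hand-rolled dispatch built from function pointers (my own toy example):

code:
#include <stddef.h>
#include <stdio.h>

/* A "class" is just a struct whose behavior lives in a function pointer. */
struct shape {
    const char *name;
    double (*area)(const struct shape *self);
    double w, h;
};

static double rect_area(const struct shape *s) { return s->w * s->h; }
static double tri_area(const struct shape *s)  { return s->w * s->h / 2.0; }

int main(void)
{
    struct shape shapes[] = {
        { "rectangle", rect_area, 3.0, 4.0 },
        { "triangle",  tri_area,  3.0, 4.0 },
    };

    /* The call site doesn't know or care which function runs; dispatch
     * is decided by the pointer stored in each object. */
    for (size_t i = 0; i < sizeof shapes / sizeof shapes[0]; i++)
        printf("%s: %g\n", shapes[i].name, shapes[i].area(&shapes[i]));
    return 0;
}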

ExcessBLarg!
Sep 1, 2001

LeftistMuslimObama posted:

Practically everyone in the class got knocked down 20% on the first project because we didn't call free() before ending the program,
Well, the good news is that it happened to practically everyone. That should be a good indicator that the instructions were ambiguous. It probably also means that if this continues to happen, the grades will be curved.

So, the program should definitely not leak memory between jobs/iterations if it's capable of processing multiple inputs; that's just indefensible. Whether the program should free all dynamically allocated memory prior to a clean exit is more a matter of style and/or circumstance, and you'll come across both in practice. Usually I try to write programs (that intend to terminate) so that they can run iteratively if provided with multiple inputs, even if I'm only providing a single input in the common case. As a consequence, I'll end up calling free on the data structures allocated for each iteration, which usually results in all memory being opportunistically freed on clean exit. That's good style.

Conversely, if freeing all one-off dynamic allocations on clean exit is cumbersome or requires a significant amount of additional code, I wouldn't worry about it, as it has little practical benefit.

You're right, you don't have to call free and the OS will reclaim the memory anyway. The best reason to explicitly aim to free all dynamically allocated memory is, as others have mentioned, so that leak checkers like valgrind complain less, making it easier to find true memory leaks. However, keep in mind that there are plenty of ways for your program to terminate abnormally (bailing out on error, getting signaled, forked children, etc.) in which it wouldn't make sense to free all outstanding memory allocations. In some cases, like signal handlers and while executing forked children, it may not even be safe to do so. In the abnormal case I generally try to invoke the simplest method of termination possible, including default signal handlers, or calling the GNU error/error_at_line functions, which I've come to like on GNU systems.

It's also true that you may, in some circumstances, be dealing with resources that the OS doesn't automatically deallocate for you, in which case you do have to take care to deallocate them in exceptional pathways wherever possible. Sometimes that might not even be safe, and the safest option is to leak resources, but that's a decision that has to be made after a solid analysis of the circumstances. But that's not a problem with malloced userspace memory on modern Unix systems, and so it's simply not worth doing that analysis. The situation is different when writing library code, or kernel code, or writing code for MMU-less systems running tiny OSes, but all of those are completely out of scope for this project.
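
As a rough illustration of that "free per iteration" style (my own toy example, nothing to do with the actual assignment):

code:
#include <stdio.h>
#include <stdlib.h>

/* Process each input independently, freeing the per-job allocation at the
 * end of every iteration.  Memory stays bounded by one job, and a clean
 * exit happens to leave nothing allocated, which keeps valgrind quiet. */
static void process(const char *path)
{
    char *buf = malloc(4096);
    if (buf == NULL) {
        perror("malloc");
        exit(1);          /* abnormal exit: don't bother freeing anything */
    }
    snprintf(buf, 4096, "processing %s", path);
    puts(buf);
    free(buf);            /* per-iteration cleanup */
}

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++)
        process(argv[i]);
    return 0;             /* everything already freed on the happy path */
}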

ExcessBLarg! fucked around with this message at 18:02 on Mar 4, 2015

ExcessBLarg!
Sep 1, 2001

LeftistMuslimObama posted:

What I'm getting here is that it's wrong of the lecturer to act like there's a firm right way in either direction, since it seems we can't even agree on it in here.
Chances are your lecturer is an academic and has always worked in academia. In that world there's a tendency to do things the (opinionated) "one true way", neglecting the fact that modern systems are quite a bit different, often for necessary pragmatic reasons, from the ones they worked on as a graduate student, or whenever it was that they last wrote any significant amount of "important" code.

Occasionally you'll come across a lecturer that went back to academia after 10+ years in the trenches. Those are the ones worth paying attention to.

ExcessBLarg! fucked around with this message at 18:08 on Mar 4, 2015

ExcessBLarg!
Sep 1, 2001

LeftistMuslimObama posted:

If only :(. This guy just finished the first year of his Master's in India and transferred here. I don't think he's even completed a course at this school yet.
Oh man, that's bad. He's going to be coming up a lot more in this thread, I promise.


ExcessBLarg!
Sep 1, 2001

ReelBigLizard posted:

Can we post Coder horrors too?
Ambiguous time zones are my biggest pet peeve. It's not just any 1/1/1970.
