ExcessBLarg!
Sep 1, 2001

nielsm posted:

Congratulations, you just described a broken password hashing system.

pigdog posted:

To clarify, you'd want to use a different, randomly generated salt (stored nearby) for every hash. (And to actually use a hash function rather than concat(),
Before you get another "Congrats!" response, it would be nice if someone explained why it's broken.

"hash(password + salt)" was pretty good in the 90s. However passwords themselves don't contain much entropy and these days hardware can be cheaply acquired to crack passwords "relatively" quickly.

The answer to that is to use, in place of a standard cryptographic hash function, a password-based key derivation function, which has an adjustable work factor so it can scale with CPU performance increases. PBKDF2 and bcrypt are two such functions, except that both fold under massively parallel cracking hardware, which is addressed by scrypt and its adjustable memory factor.

But really, the ultimate answer is that you shouldn't ever roll your own crypto, but use a library that does it right and can be updated when what was previously "right" is now considered "horribly wrong, only an idiot would do that!"
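
For instance, with Ruby's bcrypt gem the right thing is about two lines. A minimal sketch; the cost value here is illustrative and should be tuned to your hardware:
code:
require 'bcrypt'  # gem install bcrypt

# The per-password salt is generated for you and embedded in the stored
# string; "cost" is the adjustable work factor (each +1 doubles the work).
stored = BCrypt::Password.create("hunter2", cost: 12).to_s

# Verification re-hashes the candidate using the embedded salt and cost.
BCrypt::Password.new(stored) == "hunter2"  # => true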


ExcessBLarg!
Sep 1, 2001

Wheany posted:

I think the point is that if you had a global salt and you saw that several users have the same hash, you would only need to brute force the one hash and you would know the password of any user with the same hash.
It's actually worse than that. With a global salt, you can do (e.g., dictionary) attacks against all users at once. Just check "hash(dictionary_word + salt)" against all the stored hashes.

Furthermore, only one user needs a weak password for you to get in. With per-password salts you don't necessarily know which passwords are weak, so you either have to guess which users might have weak passwords, or otherwise iterate through the entire list of users, wasting time on the stronger ones.
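
A sketch of that attack, with made-up data (a real attacker would use a GPU cracker rather than Ruby, but the economics are the same):
code:
require 'digest'

GLOBAL_SALT = "s3cr3t"  # hypothetical: the single system-wide salt, leaked with the dump

# hypothetical dumped rows: username => hash(password + salt)
stored = {
  "alice" => Digest::SHA256.hexdigest("password1" + GLOBAL_SALT),
  "bob"   => Digest::SHA256.hexdigest("password1" + GLOBAL_SALT),
}

# One hash computation per dictionary word tests every user at once.
%w[letmein password1 qwerty].each do |word|
  candidate = Digest::SHA256.hexdigest(word + GLOBAL_SALT)
  stored.each { |user, h| puts "#{user} uses #{word}" if h == candidate }
end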

ExcessBLarg!
Sep 1, 2001

bobthecheese posted:

I've seen, a number of times, a password being salted with not only a system-wide salt, but also the username. This adds the security of a good salt, ... I've never been shown evidence as to why this is a bad thing.
So this is where my knowledge of crypto implementation is weak, but the problem with usernames is that they're relatively low entropy. It doesn't seem implausible that this, even in conjunction with a high-entropy system-wide salt, could be exploited along with a (not-yet-found?) deficiency in a fast hash algorithm to reduce the search space when brute-forcing a set of passwords, compared to the combined size of the search spaces for brute-forcing each password individually.

What's so hard about storing high-entropy salts adjacent to passwords? But I suppose it's a silly question, as anyone who actually knows what they're doing is using a well-vetted library, so all the roll-your-own approaches are going to be amusingly wrong.

ExcessBLarg!
Sep 1, 2001

teamdest posted:

This isn't a scenario where this is even close to a good idea.
Sure there is. Learning is an iterative process. Even "Security 101" must exclude critical implementation details while teaching security processes at a high level so that folks can learn the underlying concepts. Otherwise the topic is pretty impenetrable.

The big mistake in security/crypto is making game-time decisions about which methods to use, or implementing them with insufficient knowledge. But there's no harm in reasoning about it as a mental exercise. The actual code, however, should be left to the multi-PhD experts.

Edit: There's nothing fundamentally wrong with describing credential storage as "hash(password, unique_random_high_entropy_salt)", except that the hash functions traditionally used are computationally "too simple", such that brute-force techniques are now cost- and time-feasible. Replace "hash" with PBKDF2/bcrypt/scrypt and it's the same fundamental idea.

Furthermore, there do exist legacy systems with security implementations predating 1999, lots of them. At best, folks can gut and replace them as they come across them, but there's still knowledge to be had in understanding the particular vulnerabilities of any one deficient system.

ExcessBLarg! fucked around with this message at 18:00 on Mar 1, 2013

ExcessBLarg!
Sep 1, 2001

Tei posted:

What is the general opinion here about forcing some password variety on users?
It would be prudent to require that a password meet a minimum level of entropy; however, the actual measure of entropy should be something other than simple independent heuristics, e.g., not "must contain one upper-case character".

For example, a 64-character random hex string (256 bits) is a perfectly sufficient password. If your password requirements fail to accept that, your requirements are broken.

Essentially, if the password can't be trivially broken with a sophisticated dictionary attack, it's good.

ExcessBLarg!
Sep 1, 2001

bobthecheese posted:

* provide plenty of entropy
System-wide salts have the same amount of entropy as getRandomNumber().

ExcessBLarg!
Sep 1, 2001

Hard NOP Life posted:

I have to point out that just because someone emails your password doesn't mean they are storing it in plain text.
True, however it's a somewhat irrelevant point to me given that once you email me a password, there are numerous third parties who are now storing it in plain text.

ExcessBLarg!
Sep 1, 2001

Glimm posted:

Facebook for Android modifies the VM it runs in on older Android versions to allow it to work even with a 65k method limit Dalvik restricts apps to (65k according to this bug report:
My understanding of the report is that the 5 MB LinearAlloc limit is distinct from the 64k method limit, so it's quite likely that the Facebook app contains far fewer than 64k methods.

ExcessBLarg!
Sep 1, 2001

Zombywuf posted:

So hash on the client.
If the whole point of hashing passwords is so that someone who obtains a database dump can't obtain useful credentials from it, then how does pushing the crypto work onto untrusted clients help at all?

A malicious client will just send you the hash from the DB.

The way password hashing prevents that is by making the process of obtaining useful credentials from a database dump computationally infeasible (or at least very difficult). There's no way of doing that which doesn't also require the authentication agent (the server) to do something computationally "challenging".

ExcessBLarg!
Sep 1, 2001

Suspicious Dish posted:

Because it originally looked like this:

C++ code:
int stack_push (Stack *stack, void *elem);
int stack_pop (Stack *stack, void *elem_out);
This is an amusingly perfect description of what was obviously the original C (or C++, I guess) version, because it's exactly as useless as the Java one.

ExcessBLarg!
Sep 1, 2001
There's no useful way to get the element out of "pop" (or "stack_pop") since the assignment is local to that method only. It actually makes the data structure completely useless since it's "input only".

In other words:

carry on then posted:

Let me guess, they also think Java is pass-by-reference for objects.

ExcessBLarg!
Sep 1, 2001

Zemyla posted:

FAKE EDIT: Also, I'm pretty sure ptr - 1 isn't even valid Standard C, since it can have undefined behavior.
(ptr - 1) is well-formed for any complete (not void *) pointer type, though evaluating it is indeed undefined if ptr points to the first element of an array.

ExcessBLarg!
Sep 1, 2001

carry on then posted:

Yeah, but how is the C one just as useless?
Although nothing obligates it, presumably the implementation of the function is:
code:
...
elem_out = elem;
...
And just like in the Java version, the assignment to elem_out is meaningless since it doesn't influence any program state outside the "stack_pop" function.

ExcessBLarg!
Sep 1, 2001

Suspicious Dish posted:

Not in C. Perhaps I should have put void **elem_out as the signature instead, but it's the same.
That's exactly the point, though. To make "stack_pop" useful in C, you have to pass a double pointer and assign to the dereferenced pointer. Java doesn't have "double pointers", so there's no way to assign to a reference inside a method such that the caller sees it; references are passed by value.

Java object references aren't like C++ references. Actually, they're more like implicit C/C++ pointers. All Java objects are dynamically allocated from the heap (but automatically freed by the garbage collector). The Java "." operator is equivalent to the C(/C++) "->" operator for struct(/class) types. However, there are no "*" or "&" operators, no double pointers, no pass-by-reference, etc.

There are only three ways to get an object reference "out" of a method in Java (a sketch of the broken non-approach follows the list):
  • Return it.
  • (Edit:) Throw it as an exception, assuming a throwable object.
  • Assign it to a field in a different object that's passed into the method, possibly an entirely superfluous single-field container object used to emulate C/C++ double pointers.
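
To illustrate that broken non-approach (a hypothetical sketch, not the code that was posted):
code:
class PopDemo {
    // Mimics the useless stack_pop: assigning to the parameter only
    // rebinds this method's local copy of the reference.
    static void stackPop(Object elemOut) {
        elemOut = "popped element";
    }

    public static void main(String[] args) {
        Object result = null;
        stackPop(result);
        System.out.println(result);  // prints "null"; the caller never sees it
    }
}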

Edit:

Suspicious Dish posted:

That wouldn't make any sense.
Agreed; however, that's entirely equivalent to what the posted Java code actually does. Hence my original comment.

ExcessBLarg! fucked around with this message at 18:58 on Mar 10, 2013

ExcessBLarg!
Sep 1, 2001
I always observe a moment of pondering and reflection upon invoking "#include <ctype.h>".

It's rarely a good idea, even if there's no choice in the matter.

ExcessBLarg!
Sep 1, 2001

Otto Skorzeny posted:

As an aside, MSVC and ICC seem to be pretty sanguine on lea in general, and often replace general arithmetic muls and adds with it
This may be done partly because lea doesn't set FLAGS as a side-effect, which can be helpful for instruction scheduling.

ExcessBLarg!
Sep 1, 2001

abiogenesis posted:

What do you guys think about R?
As a language, R definitely has its warts, some of which are a product of retaining S+ compatibility, and some of which are an outcome of the R community being composed mostly of statisticians rather than PL folks or even software engineers. On the whole, though, it does quite a good job at what it does, and I'd be hard-pressed to find an acceptable alternative to R for the various bits of numeric computing I've had to do.

Off the top of my head, some benefits of R are:

The NA "number" for not available or missing values. Truly indispensible when you have input data with missing samples, trying to match vectors/matrices with different dimensions, or have algorithms (e.g., non-circular convolution filter) for which certain output values are genuinely unknown. AFAIK MATLAB doesn't have this, with some folks using NaN (which also exists in R) as a crude approximation. Most functions treat "NA" values reasonably. Any numeric computation language that lacks an equivalent concept is broken in my book.

Functions are first-class. "lapply" works like "map" in Python/Ruby/functional languages. I write a lot of R code in a somewhat functional style when it's too complicated to be vectorized (it usually contains vectorized code internally), and it beats the poo poo out of for loops and iterators.
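
A quick taste of both, with toy data:
code:
x <- c(3, NA, 5)
mean(x)                # NA: the missing sample propagates instead of silently lying
mean(x, na.rm = TRUE)  # 4: dropping missing values is an explicit opt-in

# lapply as "map": sum of squares for each list element, no for loop
lapply(list(a = 1:3, b = 4:6), function(v) sum(v^2))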

And downsides/horrors/WTFs:

The language is really "intended" to be used interactively in a terminal (although readline support makes this far better than MATLAB's terminal mode). R scripts are run with the "Rscript" program, which really just invokes the R runtime in a "batch" mode that's essentially equivalent to running the script in the interactive shell line by line. A consequence of this is that the way code is parsed depends on whether it's in a block scope or at the (interactive) top level. For example, if you write code in separate-line-for-brace style, e.g., this snippet:
code:
if (condition)
{
    true-statements
}
else
{
    false-statements
}
will work fine when run within some block scope (function or loop), but will generate a useless error if run at a script's top level:

R runtime posted:

Error: unexpected 'else' in "else"
It's because, since scripts are treated interactively, the true-block of the "if" expression is valid, complete, and executes before "else" is even parsed. When "else" is parsed, it's standalone. So the solution is: always, always write your elses as
code:
} else {
which works in any context.

The other downside to R is that it's not awesome with regard to memory management. I'm not aware of a mechanism that allows truly-large data sets (data.frames) to be file-backed, so you're limited to those that can fit in RAM. Many operations generate new data.frames/matrices instead of modifying existing ones, so sometimes you'll have to manually delete variables to free memory in the middle of a block, just to avoid running out of RAM or thrashing a lot. Related is the fact that data parsing functions (e.g., read.table) are quite slow due to their flexibility in being able to automagically parse many formats. These can be sped up quite significantly by declaring column contents and other parsing tricks, as in the sketch below. Honestly, all this is "fine" and is to be expected when dealing with larger data sets. My complaint is that R doesn't really offer a lot in the way of easy-to-use profiling tools to make these kinds of optimizations easier to implement.
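
For example (hypothetical file; the column classes are whatever your data actually contains):
code:
# Slow: read.table has to sniff every column's type as it goes
d <- read.table("big.txt", header = TRUE)

# Faster: declare the types so nothing is guessed, hint the row count so
# storage is allocated up front, and turn off comment scanning
d <- read.table("big.txt", header = TRUE,
                colClasses = c("integer", "numeric", "character"),
                nrows = 1e6, comment.char = "")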

Now, all that said, the reasons I use R over MATLAB, aside from its bias towards statistical computing, are (i) CRAN, (ii) that it's already packaged in most Linux distributions, and (iii) that, because it's free, it's trivial to instantiate a large cluster of R-running nodes without having to deal with licensing headaches.

ExcessBLarg! fucked around with this message at 17:27 on Apr 16, 2013

ExcessBLarg!
Sep 1, 2001

Jonnty posted:

All this is pretty much fair enough, but the fact remains that you should always use try/except rather than isinstance.
I don't do Python, but I imagine that "isinstance" is equivalent to Ruby's "instance_of?"/"kind_of?" methods. And yeah, there are three different ways I deal with objects that may or may not be the right type:
  • Assume the object implements an appropriate interface, blindly call it, and let the caller deal with a runtime exception.
  • When an exception is really unexpected, wrap the calls in begin/rescue (try/catch) and log in the rescue/catch block at level INFO or WARN, depending on how unexpected the situation is. You are using log4* right? Right?
  • If the call can accept objects of different interfaces (usually with different method names) and I have to distinguish between them, use the "respond_to?" method to see if the object implements the method name I expect to call.
The only time I have to use "isinstance" equivalents is when I have a method that accepts objects with different interfaces that actually consist of the same method names with different arguments or some crap. That's pretty darn rare. And even in those cases, said method is usually aliased to a longer name that's somewhat unique to the interface, or at least better distinguishes the interfaces from each other.

In the absolute worst case, it's somewhat idiomatic in Ruby to include a "to_type" method on objects that implement the interface of a particular type without inheriting from the type's canonical class. For example, you may implement a data structure with a hash-like interface that doesn't inherit from class Hash; you'd still implement a "to_hash" method that actually converts it to a Hash, usually at some unfortunate memory cost. Even then, it's preferable to replace "instance_of?(Hash)" with "respond_to?(:to_hash)" and just call the methods directly as if it were a Hash. But even that "nicer" version is a last resort, as sketched below.
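
Something like this (hypothetical class, just to show the shape of it):
code:
class SparseTable
  def initialize; @h = {}; end
  def []=(k, v); @h[k] = v; end
  def to_hash;   @h.dup;    end  # advertises the hash-like interface
end

def dump(table)
  # Duck typing: ask for the interface instead of instance_of?(Hash).
  raise ArgumentError, "not hash-like" unless table.respond_to?(:to_hash)
  table.to_hash.each { |k, v| puts "#{k}=#{v}" }
end

t = SparseTable.new
t[:answer] = 42
dump(t)  # works despite SparseTable not inheriting from Hash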

ExcessBLarg!
Sep 1, 2001
Honestly the whole premise is quite strange to me. I regularly deal with long-running computations that take a day or two to finish one "cycle" of.

1. Six hours isn't really that long. If the cycles are crazy expensive or crazy unobtainable (supercomputer) then sure, but in those cases you make sure your poo poo works well in advance of launching on those platforms. If something fucks up after six hours on a riff-raff machine, whatever: restart, go to bed.

2. I have difficulty coming up with a scenario where a user, when performing a limited-data test, would provide the right kind of dict-like object, but when performing a full-data run would not. True, user behavior can't always be predicted, but generally users don't do crazy batshit stuff either, and if they do, it's certainly on them.

ExcessBLarg!
Sep 1, 2001

Suspicious Dish posted:

C++ code:
...
    do {
        ...
        success = TRUE;
    } while (FALSE);

    if (!success && ret != NULL) {
        ...
This guy likes Pascal, right?

ExcessBLarg!
Sep 1, 2001

shrughes posted:

code:
// Copyright 2010-2012 <employer>, all rights reserved.
I use:
code:
/* Copyright © 2013  $company */
"All rights reserved" is now redundant, although it was quite necessary for works published in the US before 1989. Some folks still prefer to use it for emphasis, as there's an opinion that merely stating copyright alone is ambiguous with regard to the Copyright holder's intention with regard to licensing, although there is no ambiguity in law. Anything beyond that is just for scare.

Pardot posted:

Why? None of our stuff has anything.

tef posted:

I was under the impression that copyright was automatic under the berne convention,
This is true, but the real purpose of putting in a copyright header is to avoid any ambiguity. This code might be internal today, but a few acquisitions later there might be a push to opensource even internal tools. That's relatively easy to do when the copyright notice is already there: just insert the license below it. However, if the header is missing, there may be some question as to where the code originally came from.

Or alternatively, suppose you license your code to some other entity who (lawfully) integrates it into their internal code, and they later decide to opensource it. The copyright notice makes it clear who owns that piece of code.

Honestly, keeping track of Copyright and licenses is important stuff. Revision control does help to maintain a trail of authorship, but only if repositories are preserved and code doesn't get packaged outside of them. It's also not really that hard.

ExcessBLarg!
Sep 1, 2001

DAT NIGGA HOW posted:

Why should I trust this guy's assessment at all? Maybe if he had an alternative that other experts can either agree is done right, then I'll listen to him.
His post does contain an opinion ("CryptoCat sucks"), but it's backed by facts and verifiable statements that enable other experts to agree with his assessment or refute it.

The presentation of these not-previously-known facts and the originality of the investigation alone make his post constructive. It's not "not constructive" just because it may offend some folks.

ExcessBLarg!
Sep 1, 2001

UraniumAnchor posted:

I think the eventual consensus was that the assignment in the non-executing block still causes the interpreter to make "what" a local variable from that point on, instead of a function. So I guess Ruby functions aren't truly first class?
As already stated, Ruby methods are not first-class. However, Ruby lambdas are. Had the first stanza been defined as:
code:
what = lambda do
    return 'the'
end
The result would've been entirely reasonable: it prints the string representation of the lambda object, and "what.call" would have to be used to actually invoke it.
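
That is:
code:
what = lambda do
  return 'the'
end

puts what       # prints the Proc's representation, e.g. #<Proc:0x... (lambda)>
puts what.call  # prints "the"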

Thing is, it's pretty rare to define global methods in anything but short scripts and tutorials. If I'm going to write a script that does stuff mostly at the top level anyway, I'd probably be using lambdas.

ExcessBLarg!
Sep 1, 2001

Hughlander posted:

OpenSSLs license isn't compatible with Rubys at the source level?
Yeah, this is the big one. Ruby cannot "silently" use OpenSSL in the background without license pollution.

Actually, Ruby plus OpenSSL is fine, but Ruby scripts that are GPL-licensed (without an OpenSSL linking exception), or Ruby scripts that use a GPLed module, cannot be lawfully distributed if they also link (actually, load) OpenSSL. Thus use of the OpenSSL module in Ruby must be very explicit. Amusingly, since Ruby does runtime "linking", the common interpretation of the license issues means this program is not lawful to distribute:
code:
require 'openssl'
require 'readline'
Well, at least not in a functional sense with the requisite libraries. I guess it's OK to distribute with useless but properly licensed stubs.

ExcessBLarg!
Sep 1, 2001

vOv posted:

Is there any way to write a generic min() function in C that isn't terrible?
If your compiler supports the typeof operator and statement expressions (both GCC extensions), then you can do something like this:
code:
#define min(a,b) ({typeof(a) _a = (a); typeof(b) _b = (b); _a < _b ? _a : _b;})
But some might consider that to be terrible as well.

ExcessBLarg!
Sep 1, 2001

mjau posted:

code:
int _a = 3;
int _b = min(4, _a);  // = 4
Preprocessor deficiencies are a bottomless pit of despair.
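
The usual mitigation is giving the temporaries names nobody would plausibly use, which shrinks the collision window but doesn't close it. A sketch, still assuming the GCC extensions from above:
code:
#include <stdio.h>

/* Deliberately ugly temporary names to dodge collisions like _a above. */
#define min(a, b) ({ __typeof__(a) min_tmp_a_ = (a); \
                     __typeof__(b) min_tmp_b_ = (b); \
                     min_tmp_a_ < min_tmp_b_ ? min_tmp_a_ : min_tmp_b_; })

int main(void) {
    int _a = 3;
    printf("%d\n", min(4, _a));  /* now prints 3 as expected */
    return 0;
}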

ExcessBLarg!
Sep 1, 2001
autoquit - Automatically quit node.js servers when inactive.

OK, that's cool. But why would I ever want to do that?

Ruben Vermeersch posted:

You should really kill your backend at all times. This forces you to keep state out of it. Keep sessions in Mongo, Redis or memcache. Keep all important state out of the app tier.

That’s the first step towards scaling horizontally.
:stare:

ExcessBLarg!
Sep 1, 2001

Plorkyeran posted:

Does node make it easy to accidentally have persistent state?
Not really.

Another thing that node doesn't make easy is gracefully handling folks who connect to your service just as it's autoquitting. Serving null responses and 504s is kind of rude.

ExcessBLarg!
Sep 1, 2001

Suspicious Dish posted:

So, I don't see the race.
The node module isn't unbinding from the socket and then waiting to ensure all accepted connections are closed. With the way node's event queue works (which I'm not an expert on, but I've seen problems like this in node.js code in the past), it's not clear to me that there isn't a potential for a connection to be accepted, and then the timeout handler to invoke shutdown, before the countConnection callback is processed. Granted, the window is really small, and the likelihood of a connection is minuscule if there hasn't been one for ten minutes.

It's more that he proclaims "you should really kill your backend at all times" and provides utterly dubious reasoning that serves no real purpose on a production machine.

ExcessBLarg!
Sep 1, 2001

Suspicious Dish posted:

Swing and a miss.
The efficiency claims are dubious, but he's correct in the sense that the compiler would've errored due to an else without a preceding if.

I would still use braces myself.

ExcessBLarg!
Sep 1, 2001

apseudonym posted:

E:if you do build error handling into an if else chain of doom you are a bad person and should feel bad. Very very bad.
I read it as using "else if" in conjunction with the existing gotos, not in place of the gotos.

Which, yeah, would've resulted in a compile error in this particular case. But there are better solutions, such that the "else if" commentary wasn't really worth making.

ExcessBLarg!
Sep 1, 2001

ToxicFrog posted:

...both of which require the sftp subsystem,
Neither scp nor rsync use SFTP.

ExcessBLarg!
Sep 1, 2001

down with slavery posted:

News to me. I guess I've just never seen that because it makes no sense (we are in the coding horrors thread after all) except as a method to piss sysadmins off.
scp and rsync support remote transfers by using ssh (or another argument-compatible transport program) to establish a connection to a remote machine, invoke a remote scp/rsync process, and pass the data across the transport established between the two.

Now, this means that scp/rsync support requires that the user have remote login capability and that those commands are allowed to be executed. Also, the local and remote scp/rsync programs have to be compatible with each other. I don't believe either is standardized, but scp is an old and simple enough protocol that implementations retain compatibility with each other, while rsync is a sufficiently complex-but-useful program that everyone uses the same effective implementation.

SFTP is a bit different. It relies on the SSHv2 concept of a "subsystem", a mechanism that allows the remote facility to be called by a general name instead of relying on a specific binary being available in PATH. The SFTP protocol itself is an IETF draft standard with multiple implementations, with OpenSSH's being quite common.

Anyways, since SFTP is called as an SSHv2 subsystem (in the absence of being piggy-backed on a completely different transport), it's typically explicitly configured as such. Thus, with OpenSSH, you can turn off SFTP support in sshd_config, but it necessarily depends on sshd itself being available. Furthermore, SFTP is a "relatively recent", optional addition to the SSH protocol suite, so you may well come across machines whose SSH installations simply don't support it.

Of course, just to make things more complicated, some "scp" programs may internally attempt to use SFTP with the traditional scp remote command as a fallback. It's also possible to transfer files over an ssh connection using non-ssh-specific remote commands (e.g., "ssh user@remote 'cat > ~/dest_file' < ~/src_file"), a mechanism that might be implemented by GUI clients in the event that neither SFTP nor scp is available.

ExcessBLarg!
Sep 1, 2001

pokeyman posted:

is a complete failure because JSON is not a subset of JavaScript.
It's pretty close to being a subset of JS object literal syntax, which is good for familiarity's sake. The practical effect of it not being a perfect subset is that you can't guarantee correctness by evaling a JSON string. But folks shouldn't be doing that in the first place.
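
The classic counterexample is U+2028/U+2029, which are legal unescaped inside JSON strings but are line terminators to (pre-ES2019) JavaScript parsers:
JavaScript code:
var s = '"a\u2028b"';  // a complete JSON document: one string containing a raw U+2028

JSON.parse(s);  // fine: returns the three-character string
eval(s);        // SyntaxError on pre-ES2019 engines; the raw U+2028
                // terminates the string literal midway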

ExcessBLarg!
Sep 1, 2001

ratbert90 posted:

Am I the coding horror? I thought snprintf was fine for known string sizes. :psyduck:
Only if you don't check the return value of snprintf for truncation or error. Actually, I should qualify that a bit, since there's no universal agreement on best practice.

Some code only checks snprintf for truncation, assuming that in certain use cases it's impossible to error. Other code checks for both truncation and error, but with the only error case being "snprintf(...) == -1". To me, if you're going to check the result of snprintf at all, it makes the most sense to ensure it's in the range [0, size): even if a negative return value other than -1 is implausible/"impossible", it's just as easy to check for it too.

Even if I "know" the usage of snprintf can't result on truncation or error on the platform, I'd still do a result check as an assert "just in case".

If you really know the usage of snprintf can't result in truncation or error, or more likely just don't care (e.g., preparing a string for log output) and specifically don't want to check the result, then casting the return value to "(void)" at least states that not checking it is a conscious decision rather than an oversight. Although in this case, you definitely care whether the string is truncated when passing it to popen.
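
Concretely, the check I mean looks like this (a sketch; the command and buffer size are made up):
code:
#include <stdio.h>

FILE *run_tool(const char *path) {
    char cmd[256];
    int n = snprintf(cmd, sizeof cmd, "some_tool --input %s", path);

    /* Anything outside [0, sizeof cmd) means error or truncation; either
       way, cmd is not a string we want to hand to popen. */
    if (n < 0 || (size_t)n >= sizeof cmd)
        return NULL;

    return popen(cmd, "r");
}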

ExcessBLarg!
Sep 1, 2001

ratbert90 posted:

while (fgets(buff, sizeof(buff), fp) != NULL); to fill a buffer from the output of popen as well. Is there a cleaner way to do that?
fread shouldn't return short (except on error or EOF), so you don't need to call it in a loop. Except, apparently, there are cases where fread has returned short as a result of libc bugs. Furthermore, fread doesn't need to scan for linebreaks like fgets does.
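
A sketch of the fread version (command and buffer size illustrative):
code:
#include <stdio.h>

int main(void) {
    char buff[4096];
    FILE *fp = popen("uname -a", "r");
    if (fp == NULL)
        return 1;

    /* One fread call fills as much of the buffer as the stream provides;
       no linebreak scanning and (libc bugs aside) no loop needed. */
    size_t got = fread(buff, 1, sizeof buff - 1, fp);
    buff[got] = '\0';
    fputs(buff, stdout);

    return pclose(fp) == -1;
}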

ExcessBLarg!
Sep 1, 2001

Subjunctive posted:

I feel so old.
C99 has been out for 15 years now. It's even been superseded. It's safe to write C99 now, really.

(I may have had to make that argument recently.)

ExcessBLarg!
Sep 1, 2001

eithedog posted:

A paranoid version of fopen.
Except it doesn't actually open the file. It's more like a Rube Goldberg version of access(2) and has the same TOCTTOU vulnerability, if that's relevant.

Edit: Looks like OpenSSL coding style.

ExcessBLarg!
Sep 1, 2001

Deus Rex posted:

Why do you think a university Computer Science program should waste time teaching version control or unit testing?
A CS degree that focuses exclusively on the science aspect is really a math degree.

Truth is, there's an expectation that a CS degree includes an element of practice (even what I'd call engineering) where some time is spent covering (implementations of CS in modern) computer systems. Otherwise fresh graduates would end up rehashing the past 40 years of computer systems and be unemployable.

So yeah, I'd expect a fresh CS grad to be aware of the concept of version control, if not familiar with at least one system, and possibly even to have some high-level understanding of such a system's implementation. Unit testing is also a very important practice in programming, and big-O is a thing in theoretical computer science. Not being familiar with any of them results in "well, what the hell did you learn?"

Edit: It's like a chemistry major being completely unfamiliar with the operation of a chem lab and not being able to perform a titration.


ExcessBLarg!
Sep 1, 2001

MononcQc posted:

Pen and paper is still superior to a shitload of stuff and it owns.
Typewriters were (are) awesome for being legible and clean, with the benefit of being able to type in the margins if needed.
