MononcQc
May 29, 2007

I've just recently found out about fonts using ligatures to render fancy operator representations without requiring unicode in the code, and I'm loving that poo poo


MononcQc
May 29, 2007

lancemantis posted:

if its just static content then nearlyfreespeech is pretty cheap and they have some other stuff too

https://www.nearlyfreespeech.net

Been using them for most of my side websites. If you barely get visits it's very cheap. As your usage grows (i.e. visits and bandwidth), it gets a bit more expensive, but so far I've stuck with them anyway.

MononcQc
May 29, 2007

I started using vs code on windows because the terminal situation is atrocious there, and it's pretty good so far. Add a modal editing plugin and things are good.

MononcQc
May 29, 2007

eschaton posted:

you know what’s even better?

getting rid of the “modal editing” plug-in

use modeless editing for a few weeks and you’ll be faster because of the lower cognitive load

I started without modal editing. I had to adopt it because I would get severe tendonitis that would keep me from working more than 2 hours a day at ~20 years old. With modal editing I can use a computer for more than 8 hours a day without a problem. I'm never going back, because going back means I can't work anymore.

MononcQc
May 29, 2007

my editor, my terminal, and my browser all have vim keybindings in them. Come to the normal mode for an rear end kicking.

MononcQc
May 29, 2007

Sapozhnik posted:

I use vim to edit un-analyzeable langs like C but any time I use an actual IDE I don't really find myself missing it :shrug:

There are also a bunch of annoying little things about vim, like the fact that it doesn't display what the hell file each numbered buffer holds

"oh, but there's a plugin for that! you just..."

translation: "oh, this product is defective out of the box, but it's cool you can fix it because you obviously don't have any actual work that could better occupy your time"

you can use :ls to get the list of buffers with filenames, but personally I just work with tabs and panes, which show the filename anyway.

MononcQc
May 29, 2007

Wheany posted:

but yanking and pulling is the same thing?

you bootstrap yourself to moving text around

MononcQc
May 29, 2007

"google uses grpc, it has to be great"
*ignores the tons of middleware and discipline google engineers have to make it work*


grpc is probably fine. I'm just tired of RPC (the sixteenth big son) and "google does it so should we".

MononcQc
May 29, 2007

The thing I find interesting about the use of gRPC at Google is that, going by their SRE book, the entire setup requires an actively managed cluster, a very smart router dispatching requests to a fully managed kubernetes-like back-end of workers on a one-to-one basis, and so on. In the end it sounds like they're transforming the whole thing into something that looks a lot more like AWS Lambda than your traditional RPC setup; they just used RPC as the building block for it.

Then you have an army of folks going in there and hand-tuning timeouts, because RPCs calling RPCs calling RPCs turn out to have very weird and tricky semantics when it comes to timeouts and cancellation. Blog posts from google point out a need to respect special conventions, with unified internal interfaces that let such values be harmonized across the stack (a 'context' first argument that must be woven through all functions on the call path).

The unmentioned challenges of getting that kind of RPC architecture to work at its best in practice are kind of scary. Maybe the next RPC will be better.
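That context convention is easy to picture; here's a minimal Python sketch of a deadline being woven through a chain of calls (hypothetical names, not Google's actual internal API):

```python
import time

class Context:
    """Carries one deadline down a chain of RPC-like calls."""
    def __init__(self, timeout_s):
        self.deadline = time.monotonic() + timeout_s

    def remaining(self):
        """Time budget left before the overall deadline."""
        return self.deadline - time.monotonic()

    def check(self):
        if self.remaining() <= 0:
            raise TimeoutError("deadline exceeded")

def call_backend_a(ctx):
    ctx.check()                 # cancel early instead of doing doomed work
    # ...issue the RPC with a per-call timeout capped by ctx.remaining()...
    return call_backend_b(ctx)  # pass the same context down the chain

def call_backend_b(ctx):
    ctx.check()
    return "ok"

ctx = Context(timeout_s=0.5)    # the whole request shares one budget
result = call_backend_a(ctx)
```

The point is that every hop subtracts from the same budget instead of guessing its own timeout, which is why the convention has to be followed on the entire call path to work.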

MononcQc fucked around with this message at 12:13 on Aug 9, 2017

MononcQc
May 29, 2007

Sapozhnik posted:

retry and rollback semantics and atomicity guarantees and whatever don't go away if you simply pretend they don't exist. that's not an argument for or against rpc.

Even raw HTTP has built-in mechanisms for idempotence, cacheability, and error return values that can put the blame on either party, because its designers knew theirs is a networked model, not a "let's pretend remote servers are on this machine" kind of deal. It's an argument against RPC because RPC, traditionally and currently, tends never to address these things, even though they have been known to be useful if not necessary for decades.

This is not a pro-HTTP argument; HTTP can be and often is a fairly lovely mechanism, but it did the distsys poo poo better than most RPC systems do, even though there have been like 3 versions of HTTP but over a dozen RPC protocols.

Maybe the 24th RPC iteration will get there. It appears Finagle has a way to mark some failure types as retry-friendly now!
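The idempotence point above is concrete enough to sketch: the HTTP spec marks most methods as safe to repeat, which a client retry policy can exploit (hypothetical helper, not any specific library's API):

```python
# Methods RFC 7231 defines as idempotent: sending the request twice
# must have the same effect as sending it once, so retries are safe.
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def should_retry(method: str, status: int) -> bool:
    """Retry only idempotent methods, and only when the error return
    value blames the server (5xx) rather than the client (4xx)."""
    return method.upper() in IDEMPOTENT_METHODS and 500 <= status < 600
```

A POST that times out is the hard case: you can't know whether it went through, which is exactly the distributed-systems problem RPC frameworks tend to gloss over.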

Sapozhnik posted:

deeply chained and branching call stacks sliced across tens of machines seem like they'd have the problems you describe yeah. a higher-level abstraction would be useful there. but hey, it's google and they employ some of the best distributed systems people in the world. i'm sure they have some sort of improvements in mind here.

Deeply-chained and branching call stacks sliced across tens of machines are getting to be the norm with the current microservice trend. Even Male Shoegaze's employer, with their 4 concurrent users, uses 12 microservices. If they were able to accept the "cost of coupling", these weird situations would appear sooner rather than later. Right now they just have the bad luck of building on an even worse model. RPC would be an improvement for them, for sure.

Until google ships those improvements, I don't really feel it's that great of a model to migrate to in all the places that don't employ "some of the best distributed systems people in the world", without the full caveats and architecture workarounds google had to implement for any non-trivial set of services.

I mean for gently caress's sake, I went to a conference about reactive applications last year, and while half the presentations were about how Kafka got people to remove so many arrows from their architecture diagrams and replace them with one big Kafka box with a logo on it, at least a quarter of them were just about how to manage all the RPCs and other remote calls that turned out to be blocking and surprised everyone with big nasty tail latencies once they used microservices. Microservices just make the problem much more apparent.

MononcQc
May 29, 2007

CRIP EATIN BREAD posted:

i keep tcpdump running on every server and have it stream to kinesis for processing.

of course those generate more tcp data so i'm hitting the 500 shard limit so i had to make some API calls to amazon to dynamically stand up more streams for each server.

parsing the data in wireshark requires massive amounts of memory and is slow as dirt, but at least i can prove that it's not my fault when a server throws a 500 error.

you can try tshark for a lighter-weight command-line version of wireshark (it ships with it). You can use the same filters as you would in the GUI, but usually it struggles a lot less at handling huge dumps.

MononcQc
May 29, 2007

CRIP EATIN BREAD posted:

i hope you know i was joking

eeh, I've seen similar approaches before, so it wasn't much of a stretch. Like people replaying/rewriting packets on the fly so that a staging/pre-production stack gets actual production data coming in. Kinesis was a dumb choice though, because iirc they have like 5 qps limits by default and that just would not be possible without paging log files :shrug:

MononcQc
May 29, 2007

Lutha Mahtin posted:

Would you say that "it makes the terribleness of your systems more obvious" is a benefit of micro services? (I am a baby coder who probably won't make decisions that big for years, so don't feel like you need to write an essay here lol)

It's a benefit if you know you can fix it, have the time to do it, and are going to stick with a microservice architecture out of need already.

In a lot of cases, you just have people who want microservices because that's the new cool thing, but for whom a monolith would work really, really well for many years. In that case, they're giving themselves distributed systems problems they could avoid by scaling vertically for a long time. OTOH, it's good to keep these problems in mind because they can impact product decisions in major ways (some things that work locally just aren't possible with large distributed systems), so you can avoid them if you know you're gonna get real big.

MononcQc
May 29, 2007

tef posted:

so like

slide 1: people need to get from place to place but take all of these different things
slide 2: what if they could take one thing for every journey, ad-hoc
slide 3: uber logo

slide 1: developers struggle to put software into production, and there is a lot of amazon lock in
slide 2: what if they could use one tool to handle ci, cd and ensure local and production environments run the same, and maybe not use amazon
slide 3: docker logo

you aren't selling investors a technology, you're selling them a market you plan to control

you sell the technology to tech journalists
ride-sharing but for servers (aka SETI at home but for docker poo poo)

slide 1: the cloud is controlled by too few actors, and people have spare computing resources
slide 2: what if you could finance buying a fancier laptop by running containers for a fee during off-hours
slide 3: <toaster logo>
slide 4: surge price and rigs catching on fire

MononcQc
May 29, 2007

get better geo-proximity than any other service. Your customer might even be running your software for you. No lower latencies possible.

MononcQc
May 29, 2007

the true benefit of this product is that it may become more interesting for people than mining bitcoins and can therefore kill the bitcoin economy

MononcQc
May 29, 2007

Arcsech posted:

"the library isn't abandoned, it's finished! what else could it possibly need!" (only ever heard this one from Lisps)

Got this in Erlang. I made a library to handle exponential backoff in timers. There's straight exponential, then some with jitter, then some state/timer management. Some people were worried that the library was abandoned because it had not had new commits in a few years. How much complexity do you expect in a library that multiplies some values? Is there really a need for me to keep touching the thing for 3+ years? How wrong do you have to get it for it to need continuous maintenance?

The library's just feature-complete and has had no further bug reports to fix.
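For the curious, the arithmetic such a library wraps is roughly this (a generic Python sketch, not the actual Erlang API):

```python
import random

def backoff(attempt, base=1.0, cap=60.0):
    """Plain exponential backoff: base * 2^attempt, capped so the
    delay doesn't grow without bound."""
    return min(cap, base * 2 ** attempt)

def backoff_jitter(attempt, base=1.0, cap=60.0, rng=random):
    """'Full jitter' variant: pick uniformly in [0, backoff] so a crowd
    of clients retrying after the same outage doesn't stampede in sync."""
    return rng.uniform(0, backoff(attempt, base, cap))
```

That really is most of the job; once it multiplies correctly there isn't much left to commit.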

MononcQc
May 29, 2007

is that a dynamic scope for traits

MononcQc
May 29, 2007

JewKiller 3000 posted:

actually, xml > csv > json > yaml

CSV is pretty much the worst data format possible. It's even more laxly specified than URI querystrings.

This:
code:
\r\n\r\n
(a document that is nothing but two CRLF sequences)

is technically one of:
  • two single-column rows with empty strings as values
  • a single-column row with an empty string as a value and an empty string for a title/key
  • three single-column rows with empty strings as values
  • two single-column rows with empty strings as values and an empty string for a title/key

which one it is depends on the parser implementation, since even the shittiest CSV RFC allows this ambiguity by design.
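You can watch a real parser pick its own reading; Python's stdlib csv module, for instance, goes with yet another interpretation and yields a zero-field row per blank CRLF line:

```python
import csv
import io

data = "\r\n\r\n"  # the document from above: just two CRLF sequences
rows = list(csv.reader(io.StringIO(data)))
# Python yields one zero-field row per blank line: [[], []]
print(rows)
```

Zero fields isn't quite "a single empty-string value" either, which is the whole problem.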

MononcQc
May 29, 2007

been playing with tun/tap to write little software that bridges connections and injects failures, delays, and netsplits (asymmetric ones too) here and there, and it's been really fun :toot:

Next step is to make it work across VMs on a host rather than as a physical proxy (shouldn't be hard), and then it's gonna be deep packet inspection; thinking of either blocking unsafe protocols, or of spoofing NTP packets to cause clock drift and break systems for fun.

MononcQc
May 29, 2007

hackbunny posted:

last time I did that, after finding out there isn't an official client library for the tun/tap device, I literally copy-pasted the entirety of openvpn in my project and commented out code until it compiled and worked

I'm using a little lib called tunctl that gives an interface to read packets at the ethernet level directly out of the tap device, and then a packet codec for most standard packet formats to operate on them. For the most part I don't need to decode much, since a lot of filtering rules can be applied at a high level, particularly the ones that just simulate bad lines.

So I just open two of these and write the output of one into the other (after applying transformations) and it works pretty well.
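The bad-line simulation boils down to a small transform applied between the two devices; here's a hypothetical Python sketch of the shape of it (the real packet I/O goes through the tap file descriptors):

```python
import random

def degrade(frame, drop_rate=0.05, corrupt_rate=0.01, rng=random):
    """Simulate a bad line: return None to drop the frame, flip a byte
    to corrupt it, or pass it through untouched."""
    if rng.random() < drop_rate:
        return None                      # packet loss
    if frame and rng.random() < corrupt_rate:
        frame = bytearray(frame)
        i = rng.randrange(len(frame))
        frame[i] ^= 0xFF                 # corruption: invert one byte
        return bytes(frame)
    return frame

# The bridging loop is then just (sketch; the fds come from the taps):
# while True:
#     frame = os.read(tap_a, 65535)
#     out = degrade(frame)
#     if out is not None:
#         os.write(tap_b, out)
```

Delays and asymmetric splits are the same idea with a queue and a direction check bolted onto the transform.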

MononcQc
May 29, 2007

fwiw I still think it would be way neater if any refactoring an IDE does could be done from a standalone executable (just give it a cursor position or range to operate on). That way you could do that poo poo programmatically as well, and not just when clicking things by hand like an idiot.

Like "okay you're moving from X to Y, run this program that auto-upgrades your poo poo for you".

It also means it should work from a command line, which would not be a loss.

MononcQc
May 29, 2007

Shinku ABOOKEN posted:

this is the goal of language servers

made by msft so you know its good: https://github.com/Microsoft/language-server-protocol

yeah, clojure and leiningen have something similar for their CLI tool. My guess is that you can integrate the entire clojure toolchain real easy into a thing like VS Code then.

MononcQc
May 29, 2007

So I'm toying with TUN/TAP devices in FreeBSD along with some custom software in the hopes of writing what is essentially a tiny ethernet firewall that simulates bad network connections.

I've got two working setups:

code:
Making a VM (in virtualbox) fail:

em1 <-+-> tap1 <---> [soft firewall] <---> tap0 <-+-> tap2 <==> VM <==> [driver]
      |                                           |
   bridge1                                     bridge0


Hardware Device (em1 connects to one edge device, and em0 to another one):

em1 <-+-> tap1 <---> [soft firewall] <---> tap0 <-+-> em0
      |                                           |
   bridge1                                     bridge0
This works fine in both cases: I can inject random packet drops, corruption, delays, or blocking, or simulate slow and/or asymmetric networks on failures. The problem I have is that I somehow can't manage to make it work with local software, which I really wish I could.

I.e. I'd want to be able to make program A bind to an address on em1, and program B bind to an address on em0, and then tell the OS "make the traffic for these go through bridge0 and bridge1", but it appears the OS kind of hard-registers routes for those and always sends the traffic through the loopback interface, which bypasses my thing. Also all the loving FreeBSD networking resources online are about people trying to set up OpenVPN or something :(

MononcQc
May 29, 2007

hifi posted:

that doesn't make sense but it's been a while since i used freebsd. you can try strace? or ktrace? to check what it's doing. what about stuff like nginx where you specify what ip to listen on?

Yeah, that's the thing I'm doing: specifying which IPs to bind on. But it sounds like there's this magic loopback interface where, if you go local -> local, the OS just plops the traffic on there and doesn't give a gently caress. I'm possibly doing something wrong and I'll have to reboot and try with no external interface at all, from the ground up.

MononcQc
May 29, 2007

I am about to sign a contract to get propertesting.com published with pragmatic programmers :toot:

MononcQc
May 29, 2007

has mysql stopped silently truncating text that doesn't fit the column type?

MononcQc
May 29, 2007

the best swedish collation is fika

gotta import this practice.

MononcQc
May 29, 2007

I'm the stop < 0 || stop < 0

MononcQc
May 29, 2007

you stop googling in a problem area once you've googled enough to know what the results are gonna be, and then you're the one who starts giving answers for others to google.

MononcQc
May 29, 2007

Chalks posted:

It's the bad programmers that try to come up with the solution from scratch every time. Re-inventing the wheel without doing any research is not an efficient way to program, and it's very rare that you're doing something so novel that nobody else has solved the problem you're tackling; they may well have done it in a better way than you would eventually come up with.

I'd give a caveat. One thing I feel is a very healthy attitude is to take 15 minutes to ask "Can I find the answer to X by myself?" before looking for external help, even if you still do it afterwards as a validation step. It gets you into the habit of experimenting and gathering some minimal amount of data before going out and asking someone for help. The idea is not to stop talking to people, but to depend on them less and to become the dependable person.

MononcQc
May 29, 2007

MALE SHOEGAZE posted:

i'm rebuilding my parents' website in elixir/phoenix and hoo boy i forgot how much i like the OTP methodology. i'm not good with it yet but it's such a cool concept.

not having a type system is a huge headache though. I'm sure I'll get used to it but I keep forgetting that I can't rely on the compiler to blow up if I compare different types.

There's Dialyzer and so on, but yeah, it's nowhere near as strict as an actual type checker.

MononcQc
May 29, 2007

Flat Daddy posted:

on the topic list it also said last post was from mononcqc but I don't see a post from him

yeah I posted but it never appeared. maybe it's for the best though


MononcQc
May 29, 2007

Prototype to do all the loving up early then rewrite
