carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

tef posted:

ed is the standard editor

?


cinci zoo sniper
Mar 15, 2013




MALE SHOEGAZE posted:

i use mongo booster and it sucks but not as much as mongo compass which is lol

my coworkers seem to use that as well, but i won't touch it given the free version's limitations, unless someone spills the beans on that

fritz
Jul 26, 2003

tef posted:

ed is the standard editor

ed is the standard editor is the standard editor joke

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


i'm trying to figure out/understand message queue platforms (like activemq) to see if it would be applicable to this server/client-monitor thing i've wanted to do for work for a while

is there a difference between a topic and a queue or are they semantically the same thing

(ed) i'm having the damnedest time finding any activemq "hello world"s for C# and C++, work proxy is being a cast iron bitch today

Ciaphas fucked around with this message at 19:53 on Aug 8, 2017

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.
hahaha omg I lovelovelove inheritance


"Just inherit this super type and your library will be super simple"

But what about all these abstract methods I don't need? 95% of the class is going to throw NotImplementedException.

"lmao so what it'll handle all the backend connection stuff so you can DRY"

But this super type is highly opinionated about its dependencies and I don't actually need 90% of them anyway. BTW, this is why I prefer composition over inheritance.

*spit takes* "But inheritance is one of the pillars!!!"

:negative:

This is making me actually hate oop and I never thought that would happen.
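
the contrast being complained about, sketched as hypothetical Java (all class and method names are made up; UnsupportedOperationException standing in for NotImplementedException):

code:
// Inheritance version: the fat base class forces stubs for methods
// this subclass never needs.
abstract class BackendClientBase {
    abstract void connect();
    abstract void bulkExport();   // irrelevant to most subclasses
    abstract void adminReset();   // irrelevant to most subclasses
}

class ReportFetcher extends BackendClientBase {
    @Override void connect() { /* the only part actually needed */ }
    @Override void bulkExport() { throw new UnsupportedOperationException(); }
    @Override void adminReset() { throw new UnsupportedOperationException(); }
}

// Composition version: depend only on the one capability that is used.
interface Connector { void connect(); }

class ComposedReportFetcher {
    private final Connector connector;
    ComposedReportFetcher(Connector connector) { this.connector = connector; }
    void fetch() { connector.connect(); /* real work */ }
}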

Sapozhnik
Jan 2, 2005

Nap Ghost
inheritance is generally considered to have been a mistake

tef thinks message queues are bad. idk i guess i agree but it's not something i've ever had the pleasure of dealing with intimately.

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

having only experienced the terms from the implementer side, I believe a queue holds each message until any one consumer consumes it, while a topic sends each message to all consumers that are subscribed to it. the difference being whether one consumer or all of them get each message.
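
that matches the JMS API activemq exposes. a minimal Java sketch, assuming the activemq client jar is on the classpath (the broker URL and destination names here are made up):

code:
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class QueueVsTopic {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Queue: each message goes to exactly one of the consumers reading it.
        Queue workQueue = session.createQueue("monitor.work");
        session.createProducer(workQueue).send(session.createTextMessage("one consumer gets this"));

        // Topic: each message goes to every subscriber active at the time.
        Topic statusTopic = session.createTopic("monitor.status");
        session.createProducer(statusTopic).send(session.createTextMessage("every subscriber gets this"));

        connection.close();
    }
}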

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
good luck implementing exactly-once semantics in any meaningful way though
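
the usual compromise, sketched below with made-up names, is to accept at-least-once delivery and make the consumer idempotent by remembering message ids it has already handled; the in-memory set stands in for real persistence:

code:
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class DedupingConsumer {
    // ids already processed; a real consumer would persist and expire these
    private final Set<String> seen = ConcurrentHashMap.newKeySet();

    public void onMessage(String messageId, String body) {
        if (!seen.add(messageId)) {
            return; // redelivered duplicate, already handled
        }
        process(body);
    }

    private void process(String body) {
        System.out.println("handling: " + body);
    }
}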

Luigi Thirty
Apr 30, 2006

Emergency confection port.

terrible programmer status: I just spent 3 hours trying to figure out why ProDOS 2.4.1 crashes on my Apple II emulator

ProDOS 2.4 has hacked-in support for NMOS CPUs, which it detects using an illegal opcode that decodes to INC on a 65C02 and a NOP on a 6502. my CPU emulator printed an error and reset if it encountered an illegal instruction. I added the illegal NOPs to my instruction decoder and now it works lol

well "works" in that it says REQUIRES 64K since I don't emulate any bankswitched memory cards

eschaton
Mar 7, 2007

Don't you just hate when you wind up in a store with people who are in a socioeconomic class that is pretty obviously about two levels lower than your own?

Luigi Thirty posted:

terrible programmer status: I just spent 3 hours trying to figure out why ProDOS 2.4.1 crashes on my Apple II emulator

ProDOS 2.4 has hacked-in support for NMOS CPUs, which it detects using an illegal opcode that decodes to INC on a 65C02 and a NOP on a 6502. my CPU emulator printed an error and reset if it encountered an illegal instruction. I added the illegal NOPs to my instruction decoder and now it works lol

well "works" in that it says REQUIRES 64K since I don't emulate any bankswitched memory cards

aw, you don't emulate a 65C02 or the common behaviors of unused opcodes?

also: yet

you don't emulate any bank switched memory cards yet

carry on then
Jul 10, 2010

by VideoGames

(and can't post for 10 years!)

Jabor posted:

good luck implementing exactly-once semantics in any meaningful way though

i'm just going off of http://activemq.apache.org/how-does-a-queue-compare-to-a-topic.html

Arcteryx Anarchist
Sep 15, 2007

Fun Shoe
inheritance isn't a mistake, it's just almost always horribly used

like there are cases of horrible poo poo within the java standard library
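
one frequently cited example from the JDK, for the record: java.util.Properties extends Hashtable, so the inherited put() accepts entries the Properties API itself can't read back.

code:
import java.util.Properties;

// Properties extends Hashtable<Object,Object>, so put() (inherited) will
// happily store a non-String value that getProperty() then cannot see.
public class PropertiesGotcha {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("host", "localhost");   // the intended API
        props.put("port", 8080);                  // inherited from Hashtable, compiles fine

        System.out.println(props.getProperty("host")); // prints: localhost
        System.out.println(props.getProperty("port")); // prints: null
    }
}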

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

inherit this *points to methodz*

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder

Sapozhnik posted:

inheritance is generally considered to have been a mistake

tef thinks message queues are bad. idk i guess i agree but it's not something i've ever had the pleasure of dealing with intimately.

currently loving hate message queues.

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





message queues should only be used when you have zero control over the producers (or you give zero fucks about the consumer). if you're inserting a message queue between your own producer and consumers you are a top tier scrublord

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

the talent deficit posted:

message queues should only be used when you have zero control over the producers (or you give zero fucks about the consumer). if you're inserting a message queue between your own producer and consumers you are a top tier scrublord

I'm still trying to figure out a way to say this to coworkers without sounding too belligerent.

Luigi Thirty
Apr 30, 2006

Emergency confection port.

eschaton posted:

aw, you don't emulate a 65C02 or the common behaviors of unused opcodes?

also: yet

you don't emulate any bank switched memory cards yet

i will add 65C02 when i add Apple IIe support

i have a LanguageCard class but the softswitches for bankswitching memory aren't hooked up yet

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


the talent deficit posted:

message queues should only be used when you have zero control over the producers (or you give zero fucks about the consumer). if you're inserting a message queue between your own producer and consumers you are a top tier scrublord

speaking as a top tier scrublord what's the alternative if you control (indeed, are writing) both the producer and consumer

i first tried direct sockets and that works but i have been told it is a major no no

Lutha Mahtin
Oct 10, 2010

Your brokebrain sin is absolved...go and shitpost no more!

did you read that thing tef wrote a few weeks ago

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder

MALE SHOEGAZE posted:

currently loving hate message queues.

at my startup our architecture is microservices and a message queue.

each microservice has its own mongo database, because this reduces coupling. allowing the services to access the same database would increase coupling, so if one service needs access to data in another service, the originating service will just dump the entire collection into the pipeline, and the consuming service will write out all of the entries into its own database.

whenever an entity in the database is updated, the responsible service will emit an update event, and dump the entire object into the pipeline. consuming services will then write it to their own db, taking extreme care to update any and all data associations (a traditional DB would of course enforce these relationships for you, but it's no loss because keeping data in sync is a totally trivial problem compared to coupling, which is the worst problem).

the architects of this system did not concern themselves with concurrency, because data races are trivial compared to coupling. we've simply forced each consumer to run on a single thread, because concurrency issues are difficult to solve and we have more important problems such as reducing the amount of coupling in our system.

naturally, this system contains json entities that can be over 1mb compressed. if a user updates a single field on one of these entities, the entire 1mb model will get dumped into the queue. if they update the model twice (or 100 times) it will get dumped into the queue each time. this in no way overwhelms the system.

a few months back, i introduced an RPC mechanism so that we could at least make synchronous calls between services in response to user events. today my lead informed me that we're going to deprecate the RPC system because it increases coupling.

this is how you architect a system with 12 microservices that cannot handle more than 4 or 5 concurrent users before falling over. Fortunately, since everything is so decoupled, the system at least maintains the appearance of working.

DONT THREAD ON ME fucked around with this message at 00:47 on Aug 9, 2017

Mao Zedong Thot
Oct 16, 2008


holy :yikes:

Mao Zedong Thot
Oct 16, 2008


MALE SHOEGAZE posted:

a few months back, i introduced an RPC mechanism so that we could at least make synchronous calls between services in response to user events. today my lead informed me that we're going to deprecate the RPC system because it increases coupling.

this was my favorite part btw

hobbesmaster
Jan 28, 2008

Ciaphas posted:

speaking as a top tier scrublord what's the alternative if you control (indeed, are writing) both the producer and consumer

i first tried direct sockets and that works but i have been told it is a major no no

are direct sockets a terrible idea on loopback?

rjmccall
Sep 7, 2007

no worries friend
Fun Shoe

MALE SHOEGAZE posted:

at my startup our architecture is microservices and a message queue. [...]

a pro post

Shaggar
Apr 26, 2006

MALE SHOEGAZE posted:

at my startup our architecture is microservices and a message queue. [...]

lol. p-langs.

my homie dhall
Dec 9, 2010

honey, oh please, it's just a machine

MALE SHOEGAZE posted:

at my startup our architecture is microservices and a message queue. [...]

:popeye:

DONT THREAD ON ME
Oct 1, 2002

by Nyc_Tattoo
Floss Finder

it's insane. our services are divided up based on their business domain, not on their functionality. like: it makes sense to me to have a document service that deals with storing and converting and retrieving documents. it makes sense to have a service that deals with collecting metrics. splitting based on business domain just means that a) we have tons of shared data that needs to be kept in sync between services, and b) we're left asking why we can't just enforce these separations at the code level instead of at the architecture level.

Ellie Crabcakes
Feb 1, 2008

Stop emailing my boyfriend Gay Crungus

"We don't want tires on this new car, because tires go flat. Instead, we will develop a complex system of carbon-fiber centipede legs."

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


hobbesmaster posted:

is direct sockets a terrible idea on loop back?

well the clients are windows machines and the process server is a linux box, sorry, i didn't mean they were both on the same machine

tef
May 30, 2004

-> some l-system crap ->

MALE SHOEGAZE posted:

at my startup our architecture is microservices and a message queue. [...]

bet you things are 'reusable' too

Sapozhnik
Jan 2, 2005

Nap Ghost
well, tef is right there but let's see if i understood him correctly.

Ciaphas posted:

speaking as a top tier scrublord what's the alternative if you control (indeed, are writing) both the producer and consumer

i first tried direct sockets and that works but i have been told it is a major no no

everything ultimately runs over direct sockets, what you're saying is you rolled your own serialization and/or framing protocol. don't do that.

pick a serialization and framing format and make the producer talk directly to the consumer(s). grpc works, probably. it's just protobufs over http2.

have a static config file somewhere with the urls of the things that need to get notified when things happen.

the exact details depend on the exact nature of your problem.
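
to make that concrete, a rough grpc-java sketch; everything named Monitor*, StatusReport, and Ack is hypothetical and would come out of protoc-generated code, and the address would come from that static config file:

code:
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

public class MonitorClient {
    public static void main(String[] args) {
        // host/port would be read from the static config file mentioned above
        ManagedChannel channel = ManagedChannelBuilder
                .forAddress("monitor.example.internal", 50051)
                .usePlaintext()   // plain http2 for a LAN sketch; add TLS for real use
                .build();

        // MonitorGrpc, StatusReport and Ack are stand-ins for protoc-generated
        // classes from a proto like: service Monitor { rpc Report (StatusReport) returns (Ack); }
        MonitorGrpc.MonitorBlockingStub stub = MonitorGrpc.newBlockingStub(channel);
        Ack ack = stub.report(StatusReport.newBuilder()
                .setHost("client-42")
                .setState("OK")
                .build());
        System.out.println("server acked: " + ack);

        channel.shutdown();
    }
}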

Ciaphas
Nov 20, 2005

> BEWARE, COWARD :ovr:


Sapozhnik posted:

well, tef is right there but let's see if i understood him correctly.


everything ultimately runs over direct sockets, what you're saying is you rolled your own serialization and/or framing protocol. don't do that.

pick a serialization and framing format and make the producer talk directly to the consumer(s). grpc works, probably. it's just protobufs over http2.

have a static config file somewhere with the urls of the things that need to get notified when things happen.

the exact details depend on the exact nature of your problem.

ah, didn't realize Lutha Mahtin was talking to me, otherwise i'd have gone looking for the posts

thanks for the info, grpc gives me much more to look at and learn

tef
May 30, 2004

-> some l-system crap ->

Ciaphas posted:

speaking as a top tier scrublord what's the alternative if you control (indeed, are writing) both the producer and consumer

i first tried direct sockets and that works but i have been told it is a major no no

no-one ever got fired for using grpc

tef
May 30, 2004

-> some l-system crap ->

Ciaphas posted:

speaking as a top tier scrublord what's the alternative if you control (indeed, are writing) both the producer and consumer

i first tried direct sockets and that works but i have been told it is a major no no

it literally depends on the protocol

the talent deficit posted:

message queues should only be used when you have zero control over the producers (or you give zero fucks about the consumer). if you're inserting a message queue between your own producer and consumers you are a top tier scrublord

aka 'pubsub is about isolating, not integrating'. pretty sure i wrote something about this in this thread already

MononcQc
May 29, 2007

"google uses grpc, it has to be great"
*ignores the tons of middleware and discipline google engineers have to make it work*


grpc is probably fine. I'm just tired of RPC (the sixteenth big one) and "google does it so should we".

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture





grpc is a garbage implementation of an ok idea

istio or finagle is what you actually want

Sapozhnik
Jan 2, 2005

Nap Ghost
i mean rpc is just a catch-all term for any protocol where one party sends requests for which the other party always generates one response. it's too general a concept to be either good or bad in its own right.

rpc with some sort of stateful context to the conversation is bad (see: corba, dcom, etc). whatever crazy conway's law bullshit shoegaze's company has going on is bad.

Finster Dexter
Oct 20, 2014

Beyond is Finster's mad vision of Earth transformed.

MALE SHOEGAZE posted:

at my startup our architecture is microservices and a message queue. [...]

wow... and I thought I had it bad...

tef
May 30, 2004

-> some l-system crap ->

Sapozhnik posted:

i mean rpc is just a catch-all term for any protocol where one party sends requests for which the other party always generates one response. it's too general a concept to be either good or bad in its own right.

rpc with some sort of stateful context to the conversation is bad (see: corba, dcom, etc). whatever crazy conway's law bullshit shoegaze's company has going on is bad.

i dunno sometimes stateful contexts can work if you're using like a session id to represent it, like within a cookie


tef
May 30, 2004

-> some l-system crap ->

MALE SHOEGAZE posted:

at my startup our architecture is microservices and a message queue. [...]

needs more containers and to be ported to kubernetes by next month tia
