taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

People walking away from their desks without locking?

Keetron
Sep 26, 2008

Check out my enormous testicles in my TFLC log!

beuges posted:

Another possible reason for the single big commit at the end could be to hide actual progress, or lack thereof. If you're wasting 80% of your time and not actually being productive, it will show up in your commits.

Really, I think it is due to deep-rooted insecurity. I have days where I hardly write any code, but it doesn't make me feel bad, as I trust that the days where I do write code are good enough. I believe in myself and trust myself to deliver enough good days to outweigh the bad ones. It took me 20 years in the workplace to learn this, though; at 25, many people are not there (yet).

Also: being productive (writing code) 20% of the time is fine; you likely spend 10-30% grasping the use case by talking to users, the PO, and so on, 10-30% reading old code to find a good spot for your new code, and another 10-30% in formal and informal meetings.

beuges
Jul 4, 2005
fluffy bunny butterfly broomstick

Keetron posted:

Really, I think it is due to deep-rooted insecurity. I have days where I hardly write any code, but it doesn't make me feel bad, as I trust that the days where I do write code are good enough. I believe in myself and trust myself to deliver enough good days to outweigh the bad ones. It took me 20 years in the workplace to learn this, though; at 25, many people are not there (yet).

Also: being productive (writing code) 20% of the time is fine; you likely spend 10-30% grasping the use case by talking to users, the PO, and so on, 10-30% reading old code to find a good spot for your new code, and another 10-30% in formal and informal meetings.

Oh I agree with this completely. But I have also worked with people who delay committing code because there's no code to commit yet, even after factoring in that you don't spend 8 hours a day in your IDE. With one dev, there would even be periodic deployments into our test environment with stubbed out responses but no actual business logic... in status updates he'd provide imaginary progress or even say that the work was complete and any issues were just bugs that he'd fix, but in reality he'd been working on side gigs instead and hadn't actually done much work at all.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Committing for the sake of committing more frequently tends to wind up with so many garbage commits you don't know what you're doing anymore. I spend a lot of my time debugging and figuring out why something's not working as expected and it's not like I'm going to be writing a lot of code then. I still like the idea that once a codebase reaches a certain size there is more value to be gained from deleting code than adding more and that it takes more engineering skill. Then my coworker submitted a PR yesterday to remove all our vendored go mods that deletes 1.2 million lines of code from our monorepo...

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.

necrobobsledder posted:

Committing for the sake of committing more frequently tends to wind up with so many garbage commits you don't know what you're doing anymore.

You can always just squash your commits, rebase, whatever. There's no reason to NOT Always Be Committing.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
What I mostly meant is that you devolve into a routine of git commit -m 'fdsjkalff djskam,l', and for what real purpose? It'd be better to have an IDE do that kind of nonsense for you if all we care about is committing noise that nobody besides yourself would read. I mean hell, I can write a script to do a git commit and push every time I save a file, but inevitably I wind up committing something that's Not A Good Idea, such as credentials in config files meant for local dev only, and then I'm stuck making sure the remote repos never contain them again. All rules and "best practices" have exceptions, and whatever works works... until it doesn't.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...
Do you not have remotes where you can privately and quietly stash in-progress work?

Usually, I'll commit whenever some discrete change "works", so if I go off on a wild tangent and break everything I can just toss it and come back to my checkpoint. Later on, I can squash/rebase/etc. to turn my nightmare branch into one or more sensible commits that I'm happy to show others.

Steve French
Sep 8, 2003

taqueso posted:

People walking away from their desks without locking?

In that case it doesn't matter that the file is writable by any user:

lifg posted:

It's my last day on the job and I've discovered that five users have their .bash_profile set to -rw-rw-rw-, and it's taking all my willpower to not leave a prank.

Queen Victorian
Feb 21, 2018

I always try to divide my commits into logical chunks, like ‘revise styling’ or ‘add validation functions’ so they show the progression and make it easy to do reverts and cherry picks and whatnot if needed. They’re all squashed and rebased at the end, but it’s nice to have them visible during the process.

The thing about my former coworker’s habit was that his poo poo wouldn’t even show up on the remote until he was ready to put up the merge request. And then when it did, all you could see was the single commit.

Yep I just said former coworker. Last day was yesterday, and today I’m meeting my future coworkers for lunch downtown ahead of this big tech conference that we are attending (they got me a ticket even though I haven’t technically started working for them yet). :yotj:

Macichne Leainig
Jul 26, 2012

by VG
Drat! That's cool. My old job was big on "self-improvement" but never sent anyone to tech conferences or any other event like that (despite being an MS shop, and MS Build being a great event). Once, though, I did talk my boss into letting me go to a free event called "Denver Dev Day" that had some volunteer speakers, which was a fun little event. My favorite speaker was a guy who did RDBMS design and had some decent ideas on how to set up your tables (like every table should have a good PK column, something we violated a lot at my old job, much to my chagrin).

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Volmarias posted:

Do you not have remotes where you can privately and quietly stash in-progress work?

Usually, I'll commit whenever some discrete change "works", so if I go off on a wild tangent and break everything I can just toss it and come back to my checkpoint. Later on, I can squash/rebase/etc. to turn my nightmare branch into one or more sensible commits that I'm happy to show others.

I've never had such remotes anywhere I've worked, and a rogue remotely accessible git repo is something I would find in the Nessus scans I usually run weekly, hunt down to the network port, and then murder the person who set it up. I could certainly commit locally (and sometimes do, depending on how much of my work is exploratory vs. known/productive) and never push, but the number of times I've said "gosh, I wish I could go back to what I did %d minutes ago" is a handful, and in those cases I've gotten it back from the reflog in like 5 minutes. Note that I'm normally writing Terraform or CloudFormation code that, if I screwed it up, would blow away half of production, and a lot of scaffolding and process is set up to keep me from doing just that.

My point is more that there's a middle ground between committing every few minutes when the change is one commented-out line, which is really low cognitive load, and waiting 2 months to push anything at all, which is impossible to review, as mentioned earlier.

vonnegutt
Aug 7, 2006
Hobocamp.

Queen Victorian posted:

I always try to divide my commits into logical chunks, like ‘revise styling’ or ‘add validation functions’ so they show the progression and make it easy to do reverts and cherry picks and whatnot if needed. They’re all squashed and rebased at the end, but it’s nice to have them visible during the process.

I try to do this as well, and I find it works best when the work is well-defined and straightforward. Other times, it starts well and then turns into 'wip refactoring' ... 'wip undoing refactoring' ... 'wip try again' ... 'wip ugh' and then several 'fix tests' or 'tidy' commits at the end. I find this is especially the case when doing things like upgrading frameworks or deployments. Not every task can be broken into an orderly progression of commits. Further complicating matters, in order to work out things like CI pipelines, sometimes you need to commit just to trigger some post-commit action.

Still, I totally agree that doing one giant commit for each PR is a nightmare.

JehovahsWetness
Dec 9, 2005

bang that shit retarded

vonnegutt posted:

Still, I totally agree that doing one giant commit for each PR is a nightmare.

I'm kind of curious what everyone's big offender is. This is the biggest I found in our "core" project:

code:
git show 85eed2d9a81dcc6112f6b104f3da059257a1d317 --stat | tail -1
 297 files changed, 10955 insertions(+), 11049 deletions(-)

raminasi
Jan 25, 2005

a last drink with no ice

necrobobsledder posted:

I could certainly commit locally (and sometimes do, depending on how much of my work is exploratory vs. known/productive) and never push, but the number of times I've said "gosh, I wish I could go back to what I did %d minutes ago" is a handful, and in those cases I've gotten it back from the reflog in like 5 minutes.

Right, so...commits.

CPColin
Sep 9, 2003

Big ol' smile.
At my current job, I made a four-line change to a single file that came with 700 lines of changes to a dozen other files, all necessary to restructure the 0%-test-coverage code into something I could actually unit test. But the winner in my past is probably all the times we came out with a new version of the code generation tool at Experts Exchange and wound up with a hundred thousand changes across a thousand files. (One of my great accomplishments there was moving a bunch of the boilerplate in the generated code into common utility functions, so we wouldn't have to regenerate everything and freak out QA so often.)

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

raminasi posted:

Right, so...commits.

The times I've specifically needed the reflog have been rebases I messed up, so I must have at least committed locally to keep the workspace clean; a local commit is compulsory there. But I don't regularly rebase my garbo commits before I push - squash on merge is there, after all. On longer-lived projects I try to push something, usually at the end of the day, to remind myself of my failures in life progress. A lot of the time when we talk about commits as a profession, we seem to mean commit and push.

JehovahsWetness posted:

I'm kind of curious what everyone's big offender is. This is the biggest I found in our "core" project:

This was the Go mod upgrade I mentioned above, which removed all our vendored Go dependencies from the monorepo:

code:
$ git show 29bbecb --stat | tail -1
 3418 files changed, 121 insertions(+), 1735676 deletions(-)

Xarn
Jun 26, 2015

vonnegutt posted:

I try to do this as well, and I find it works best when the work is well-defined and straightforward. Other times, it starts well and then turns into 'wip refactoring' ... 'wip undoing refactoring' ... 'wip try again' ... 'wip ugh' and then several 'fix tests' or 'tidy' commits at the end. I find this is especially the case when doing things like upgrading frameworks or deployments. Not every task can be broken into an orderly progression of commits. Further complicating matters, in order to work out things like CI pipelines, sometimes you need to commit just to trigger some post-commit action.

Still, I totally agree that doing one giant commit for each PR is a nightmare.

I find it exceedingly rare to end up in a situation where I cannot make good commits even after I am done with the work.

E.g. right now, I am hacking in a prototype of functionality we will need in September, and as it turns out, it is a much more involved feature than it seemed at first look. This has led to multiple commits along the lines of "WIP: first scaffolding for Butts feature" and "completely refactoring Butts", but I know I can rebase and combine these into a coherent story at the end. There are also some commits that I will split off into a separate, independent PR, because they improve the code independently of the Butts feature, and they are complete and well-defined too.

My Rhythmic Crotch
Jan 13, 2011

API chat. Do you maintain a web API? I need to bounce ideas around.

I have one person here who complains extremely loudly about not having a PATCH method on my API and I may not be able to put it off any more. I don't want to implement it for the following reasons:

* I would need a separate code path to support PATCH, where I would basically just do a series of SQL updates, one for each field present in the JSON data sent to me. This feels really dirty and wrong for some reason.
* I would not be able to do any sanity checking on the data sent. I would just have to blindly trust that the user has sent me something that's not going to further fragment our already feeble database, or cause further issues. A lot of this revolves around our crusty, crunchy old database schema, which I have no control over or ability to improve. For example, null in certain tables has a certain semantic meaning, etc.
* (Edit) I suppose, on a more nuts-and-bolts level, I am not really happy with what I suspect will be necessary to make the JSON deserialization portion work. In other words, I don't know if a member is null because it wasn't part of the PATCH request, or because the client actually set it to null. I'll have to do some kind of generic deserialization thing, which sounds poopy.
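
For what it's worth, the null-vs-missing ambiguity is usually handled with a sentinel value. A minimal Python sketch (field names invented, and the real API presumably isn't Python): anything distinct from None can mark "not sent at all", so an explicit null survives deserialization.

code:
import json

MISSING = object()  # sentinel: "the client did not send this field at all"

def parse_patch(body, allowed_fields):
    """Return {field: new_value} for the fields present in the PATCH body.

    A value of None means the client explicitly sent null; fields the
    client omitted simply don't appear in the result.
    """
    payload = json.loads(body)
    updates = {}
    for field in allowed_fields:
        value = payload.get(field, MISSING)
        if value is not MISSING:
            updates[field] = value  # may legitimately be None
    return updates

# "progress" is updated, "notes" is explicitly cleared, "owner" is untouched.
print(parse_patch('{"progress": 42, "notes": null}',
                  ["progress", "notes", "owner"]))
# -> {'progress': 42, 'notes': None}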

I suppose it just feels like adding PATCH support is going to cause a non-zero number of unknown problems. The guy asking for it swears up and down that, without PATCH support, clients *must* do a GET-POST every time something needs to be updated. That is not accurate... they just need to do a single GET at the beginning, cache the data, and then reuse it for subsequent POSTs.

Sigh

My Rhythmic Crotch fucked around with this message at 22:27 on Aug 21, 2019

Fellatio del Toro
Mar 21, 2009

TBH the answer is going to be highly dependent on the nature of the data and how it's being used. I wouldn't say that "I don't want to GET/POST the whole thing" is a very compelling reason by itself.

Is there some reason why a complete GET/POST is expensive? Is latency a major concern? Are they doing bulk updates of single fields? Is the thing they really want to do something that would be better served by a new API (e.g.: update by query)? Does every other API already implement this as a standard?

Regardless, the question isn't "what is the best way to do this?" but "is it worth spending the time to do this?", and it's not up to you to come up with that justification, and possibly not even up to you to make that decision.

My Rhythmic Crotch
Jan 13, 2011

Latency is not a concern. The client application has business logic to update a progress value once every few minutes. The guy reads his Apache log, sees all of our clients (several thousand) each doing a GET/POST just to update that progress value every few minutes, and has decided that is too much overhead.

I'm not sure what you mean by "update by query" though. Could you elaborate?

My Rhythmic Crotch fucked around with this message at 23:18 on Aug 21, 2019

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
Based on that description it sounds like PATCH will just be a server-side get+post, because that's the only validation you'll be able to perform. Gee, it would be nice to have a database with savepoints so you can lock+patch+validate+patch+validate+commit.
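
As a rough sketch of that shape (purely illustrative: Python with sqlite3 standing in for the real database, and the table, columns, and validate() hook are all invented), the handler loads the whole row, merges in only the patched fields, runs the same validation the full POST path uses, and writes back inside one transaction.

code:
import sqlite3

def validate(obj):
    # Stand-in for whatever whole-object validation the POST path already does.
    if not 0 <= obj["progress"] <= 100:
        raise ValueError("progress out of range")

def apply_patch(conn, job_id, updates):
    """PATCH as a server-side get+post: read, merge, validate, write, commit."""
    with conn:  # one transaction; commits on success, rolls back on exception
        # On a real RDBMS you'd SELECT ... FOR UPDATE here to hold the row lock.
        row = conn.execute(
            "SELECT progress, status FROM jobs WHERE id = ?", (job_id,)
        ).fetchone()
        if row is None:
            raise KeyError(f"no job {job_id}")

        merged = {"progress": row[0], "status": row[1]}
        merged.update(updates)  # only the fields the client actually sent

        validate(merged)

        conn.execute(
            "UPDATE jobs SET progress = ?, status = ? WHERE id = ?",
            (merged["progress"], merged["status"], job_id),
        )

The lock (or savepoint) is what keeps two concurrent patches from interleaving between the read and the write.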


Edit after the previous two posts magically appeared: How is updating a progress value going to fail the type of multi-field validation you mentioned before? What are you really trying to do?

PhantomOfTheCopier fucked around with this message at 23:19 on Aug 21, 2019

My Rhythmic Crotch
Jan 13, 2011

Ahhh... hah. I had not thought of having the server fetch the entire object, update the one thing, run whatever validation/checks etc., and then save. That could work.

Duh, don't know why I didn't think of that.

PhantomOfTheCopier posted:

Edit after the previous two posts magically appeared: How is updating a progress value going to fail the type of multi-field validation you mentioned before? What are you really trying to do?

That's just the simplest use case, and the one that he keeps harping on. There are areas of the database with truly idiotic schema where I have to do data mangling and other shenanigans before committing to the DB. Those more complicated areas are where things would go wrong, because the clients have no knowledge of what shenanigans the API is doing with their data before saving it.

My Rhythmic Crotch fucked around with this message at 23:23 on Aug 21, 2019

Fellatio del Toro
Mar 21, 2009

I wouldn't do anything without some real metrics on how much these operations are actually bogging down your servers. He wants you to make a new API and then rewrite a bunch of client code to optimize a REST call that happens once every few minutes?

My Rhythmic Crotch
Jan 13, 2011

He owns the client code, and would be only too happy to rip out the GET/POST logic and replace it with PATCH. Why he is willing to do that, but not cache the data and reuse it each time, is beyond me.

It's not bogging down the server; he just... doesn't like seeing it in the Apache logs.

Edit: fun fact, this fellow used to be my boss. This web API is my idea and creation, the first thing I did when I got here like 6 years ago. It's been a love/hate thing between this guy and my API the entire time: he loves to use it and complains about it constantly. When a developer does something dumb through the API (thought they were on dev but were really on prod or something), he's all ":words: well the API should be smart enough to know and not let them do that!! :words:"

My Rhythmic Crotch fucked around with this message at 23:39 on Aug 21, 2019

spiritual bypass
Feb 19, 2008

Grimey Drawer
How big of a server are you using? Would cutting the request volume in half save money by allowing you to use a smaller server?

Fellatio del Toro
Mar 21, 2009

If you do decide to help him, the answer is probably not writing a generic JSON manipulation API but just giving him an Update Progress endpoint.

My Rhythmic Crotch
Jan 13, 2011

It's hard to say with 100% certainty off the top of my head, but I'd say there's not much possibility for cost reduction. We have 2 servers per office location and they are both pretty heavily loaded. The API load is only one part of the puzzle, and patch support would be small potatoes compared to the heavy lifting that goes on elsewhere.

Fellatio del Toro posted:

If you do decide to help him, the answer is probably not writing a generic JSON manipulation API but just giving him an Update Progress endpoint.

I'm starting to think this as well. Or perhaps just a few endpoints to hit any other low hanging fruit.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
Progress ping API seems like a good place to start. Remember to code in a few spare sleep(100ms) statements so you can improve performance in the future when they keep complaining. "We found a threading bug in testing, this was the workaround".

Keetron
Sep 26, 2008

Check out my enormous testicles in my TFLC log!

PhantomOfTheCopier posted:

Progress ping API seems like a good place to start. Remember to code in a few spare sleep(100ms) statements so you can improve performance in the future when they keep complaining. "We found a threading bug in testing, this was the workaround".

That is evil, I like it.

My thought was: if it is every few minutes, a GET/POST or GET/PUT makes the most sense to me. This way the client can be sure it always has the latest version of the object it is updating. But I have not worked enough with SQL databases to know if that makes sense in that environment. We work all-NoSQL, and there it is a real concern.

AWS made it too easy to implement DynamoDB tables, so we have hundreds of them. Then we have a user asking: "Can I also search on field Y?" Which you can't really do, as Dynamo is a key-value store that commonly has one of the fields (id?) stored as the key. So we implement Elasticsearch to get around that, and now we have two places to keep an object up to date. I must admit that this is all super fast, but it feels hacky.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

Keetron posted:

AWS made it too easy to implement DynamoDB tables, so we have hundreds of them. Then we have a user asking: "Can I also search on field Y?" Which you can't really do, as Dynamo is a key-value store that commonly has one of the fields (id?) stored as the key. So we implement Elasticsearch to get around that, and now we have two places to keep an object up to date. I must admit that this is all super fast, but it feels hacky.

That's why secondary indices exist in Dynamo. Fair warning, they'll cost you if you just start throwing them around everywhere. Dynamo's pricing model is designed around offering you a cheap and easy-to-use data store to begin with, and then ratcheting up the price as you need to implement more flexibility.

If you're working with data that's frequently accessed but won't change often (say, "give me a list of all client identifiers and their corresponding names" in an organization that requires signed paperwork to onboard a client), don't be afraid of full table scans. Scan once, cache the results for a while, and let the DB be a rarely-accessed underlying source of truth.
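
For the DynamoDB side of the thread, a hedged boto3 sketch of both ideas (table, index, and attribute names are all invented): querying a global secondary index for the "search on field Y" case, and scan-once-then-cache for small, slow-changing reference data.

code:
import time
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("clients")  # hypothetical table

def clients_in_region(region):
    # Assumes a GSI named "region-index" whose partition key is "region".
    resp = table.query(
        IndexName="region-index",
        KeyConditionExpression=Key("region").eq(region),
    )
    return resp["Items"]

_cache = {"items": None, "fetched_at": 0.0}

def all_clients(max_age_seconds=300):
    # Full scans are fine for small, rarely-changing data; cache the result.
    if _cache["items"] is None or time.time() - _cache["fetched_at"] > max_age_seconds:
        items, resp = [], table.scan()
        items.extend(resp["Items"])
        while "LastEvaluatedKey" in resp:  # scans paginate at 1 MB
            resp = table.scan(ExclusiveStartKey=resp["LastEvaluatedKey"])
            items.extend(resp["Items"])
        _cache.update(items=items, fetched_at=time.time())
    return _cache["items"]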

Messyass
Dec 23, 2003

Keetron posted:

So we implement Elasticsearch to get around that, and now we have two places to keep an object up to date. I must admit that this is all super fast, but it feels hacky.

When working with distributed systems there will always come a point where you have to deal with eventual consistency. Might as well embrace it.

Carbon dioxide
Oct 9, 2012

I worked on a REST API that implemented a bunch of the more unusual vocabulary. PATCH. HEAD.

Part of the API had to deal with uploads/downloads of (large) files.
HEAD was used to get metadata on a file (the file headers, cached in the DB), while GET downloaded the entire file, pulling it from a file system.

It turned out that a bunch of out-of-the-box REST clients, used by other teams in the company who basically drag-and-dropped applications together, had no way to deal with those, so we were forced to build a getHeaders GET method that would just call the HEAD code in the server application.
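
As a toy illustration of that split (Flask-flavoured Python rather than whatever the real service ran on; routes, header names, and paths are invented): HEAD answers from the cached metadata without ever opening the file, GET streams the file itself, and a getHeaders-style GET exposes the same metadata as a plain body for clients that can't issue HEAD.

code:
from flask import Flask, Response, jsonify, request, send_file

app = Flask(__name__)

def cached_metadata(file_id):
    # Stand-in for the file headers cached in the database.
    return {"X-File-Size": "1048576", "X-File-Checksum": "abc123"}

@app.route("/files/<file_id>", methods=["GET", "HEAD"])
def file_endpoint(file_id):
    if request.method == "HEAD":
        # Metadata only; the (large) file on disk is never touched.
        return Response(status=200, headers=cached_metadata(file_id))
    return send_file(f"/data/{file_id}", as_attachment=True)

@app.route("/files/<file_id>/headers")
def get_headers(file_id):
    # Workaround for drag-and-drop clients that can't issue HEAD:
    # the same metadata, but as an ordinary GET with a JSON body.
    return jsonify(cached_metadata(file_id))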

Keetron
Sep 26, 2008

Check out my enormous testicles in my TFLC log!

Messyass posted:

When working with distributed systems there will always come a point where you have to deal with eventual consistency. Might as well embrace it.

Oh, you have no idea how much I have to deal with this poo poo.


Space Gopher posted:

That's why secondary indices exist in Dynamo. Fair warning, they'll cost you if you just start throwing them around everywhere. Dynamo's pricing model is designed around offering you a cheap and easy-to-use data store to begin with, and then ratcheting up the price as you need to implement more flexibility.

Background: I work for a global corp that has dev teams mostly in the US and some in Europe; I am part of the European bit. This corp went all-in on AWS. We write microservices using Java (OpenJDK 11), Spring Boot 2, and AWS cloud-native stuff. It is pretty cool and recruiters go crazy on LinkedIn.
A few months ago, someone, somewhere, took a good look at the invoice Amazon sends us every month and had a heart attack. Friendly and not-so-friendly emails went out about reducing costs; these mails included Excel sheets with the biggest offenders. If not on prod, then at least make sure test is not over-scaled. Makes sense, right? Then some smart people suggested basically halting test when nobody was working anyway, such as during the night. Did I mention this is a global corp with offices around the globe, on which the sun basically never sets? Those of us in time zones that are nighttime for headquarters are still tracing bugs in test that stem from under-capacity or sleeping systems somewhere down the chain. When this started happening, we had a few weeks where we could not deploy anything to test or work against test, hindering a handful of teams full of highly paid developers. Right when I started threatening to do a daily report on lost productivity expressed in monetary terms, the services were scaled back up.

Lord Of Texas
Dec 26, 2006

Keetron posted:

Oh, you have no idea how much I have to deal with this poo poo.

Background: I work for a global corp that has dev teams mostly in the US and some in Europe; I am part of the European bit. This corp went all-in on AWS. We write microservices using Java (OpenJDK 11), Spring Boot 2, and AWS cloud-native stuff. It is pretty cool and recruiters go crazy on LinkedIn.
A few months ago, someone, somewhere, took a good look at the invoice Amazon sends us every month and had a heart attack. Friendly and not-so-friendly emails went out about reducing costs; these mails included Excel sheets with the biggest offenders. If not on prod, then at least make sure test is not over-scaled. Makes sense, right? Then some smart people suggested basically halting test when nobody was working anyway, such as during the night. Did I mention this is a global corp with offices around the globe, on which the sun basically never sets? Those of us in time zones that are nighttime for headquarters are still tracing bugs in test that stem from under-capacity or sleeping systems somewhere down the chain. When this started happening, we had a few weeks where we could not deploy anything to test or work against test, hindering a handful of teams full of highly paid developers. Right when I started threatening to do a daily report on lost productivity expressed in monetary terms, the services were scaled back up.

Go all in on FaaS and bypass the curfew problem? :shrug: Sounds like you are not that far off if you're on newer JDKs and Boot 2.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
Just remember that a GET/POST can suffer race conditions, whereas PATCH presumably would prevent that.

Content: Yes, the reason you are a software engineer is to uncover the complexities in the existing code. No, this does not grant you the right to file an 85% refactoring code review using the excuse "I kept finding that my small changes required bigger refactoring". One of the other reasons you are here is to take the complex and make the presentation simple.

Pie Colony
Dec 8, 2006
I AM SUCH A FUCKUP THAT I CAN'T EVEN POST IN AN E/N THREAD I STARTED

PhantomOfTheCopier posted:

Just remember that a GET/POST can suffer race conditions, whereas PATCH presumably would prevent that.

What does this even mean?

sunaurus
Feb 13, 2012

Oh great, another bookah.

Pie Colony posted:

What does this even mean?

Client 1 - GET
Client 2 - GET
Client 1 - POST based on GET results
Client 2 - POST based on GET results

Client 2 overwrites Client 1's changes (even if they were completely unrelated to Client 2's changes)

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



sunaurus posted:

Client 1 - GET
Client 2 - GET
Client 1 - POST based on GET results
Client 2 - POST based on GET results

Client 2 overwrites Client 1's changes (even if they were completely unrelated to Client 2's changes)

I don't see how PATCH fixes that in the case of a related or overlapping change unless you're tracking object versions, which sounds like it's not possible in this case since the API is glossing over a clusterfuck of a schema.

poemdexter
Feb 18, 2005

Hooray Indie Games!

College Slice
An object exists: {name: joe, sex: male, height: tall}
Client 1 wants to change height to short.
Client 2 wants to change sex to female.

Client 1 GET object {name: joe, sex: male, height: tall}
Client 2 GET object {name: joe, sex: male, height: tall}
Client 1 POST object {name: joe, sex: male, height: short}
Client 2 POST object {name: joe, sex: female, height: tall}

Result: {name: joe, sex: female, height: tall}

-- or --

Client 1 PATCH object {height: short}
Client 2 PATCH object {sex: female}

Result: {name: joe, sex: female, height: short}

Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.



poemdexter posted:

An object exists: {name: joe, sex: male, height: tall}
Client 1 wants to change height to short.
Client 2 wants to change sex to female.

Client 1 GET object {name: joe, sex: male, height: tall}
Client 2 GET object {name: joe, sex: male, height: tall}
Client 1 POST object {name: joe, sex: male, height: short}
Client 2 POST object {name: joe, sex: female, height: tall}

Result: {name: joe, sex: female, height: tall}

-- or --

Client 1 PATCH object {height: short}
Client 2 PATCH object {sex: female}

Result: {name: joe, sex: female, height: short}

Yeah, that's why I specified related/overlapping. Emphasis on overlapping, I guess, but you could also, say, make an object inconsistent by changing properties that are related: make a 'car' a 'bicycle' by changing its type, but hey, what's this engine property still doing here? (A bad example, but you get the idea.)
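
The usual escape hatch for the overlapping case is optimistic locking: every write, PATCH or POST, has to name the version it read, and it loses if someone else got there first. A sketch in the same hypothetical Python/SQL vein as the earlier one (the table, columns, and version scheme are assumptions, not anything this API actually has):

code:
import sqlite3

def patch_with_version_check(conn, thing_id, expected_version, updates):
    """Apply a partial update only if the row is still at the version the
    client read; otherwise raise, which would map to an HTTP 409/412."""
    # Column names come from a server-side whitelist, never from the client.
    assignments = ", ".join(f"{col} = ?" for col in updates)
    with conn:
        cur = conn.execute(
            f"UPDATE things SET {assignments}, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (*updates.values(), thing_id, expected_version),
        )
        if cur.rowcount == 0:
            raise RuntimeError("conflict: object changed since it was read")

Over HTTP the same idea is an ETag plus If-Match: the GET hands back a version token, the PATCH (or POST) has to echo it, and a stale write fails instead of silently clobbering someone else's change.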
