|
minato posted:I think you have the names mixed up. Russ Meyer was all about tits. Russ Cox is an rear end. Lol yep thank you.
|
# ? Sep 9, 2018 23:36 |
|
|
e: very wrong thread.
|
# ? Sep 9, 2018 23:39 |
|
How are you expected to keep protobuf definitions and compiled files up to date? Just constantly ping a GitHub repo until the file changes or something? That sounds dangerous.
|
# ? Sep 13, 2018 14:36 |
|
Pollyanna posted:How are you expected to keep protobuf definitions and compiled files up to date? Just constantly ping a GitHub repo until the file changes or something? That sounds dangerous. That's the best part about protobufs - if you do them right, you don't have to. As long as your protobuf only changes in particular ways, servers and clients can keep using the definitions they were originally compiled with and continue to work correctly, even when the thing they're talking to is using updated definitions. Can you give an example of a situation that you're concerned about?
|
# ? Sep 13, 2018 14:47 |
|
Pollyanna posted:How are you expected to keep protobuf definitions and compiled files up to date? Just constantly ping a GitHub repo until the file changes or something? That sounds dangerous. On the tech side -- you have your CI compile them down into a new package that gets pushed on every "release" to github/whatever of your proto definitions. You then bring the compiled proto package into your app like any other dependency, managing versions to get new features like any other dependency. On the people side -- you don't break your API contracts by making backwards incompatible changes. If your message definitions are changing quickly enough that apps need new fields right away, that type of change is probably a coordinated effort to do stuff with the new fields, and so bringing in updated proto definitions is probably the least worry.
|
# ? Sep 13, 2018 14:51 |
|
Jabor posted:That's the best part about protobufs - if you do them right, you don't have to. As long as your protobuf only changes in particular ways, servers and clients can keep using the definitions they were originally compiled with and continue to work correctly, even when the thing they're talking to is using updated definitions. We have some enums (a list of insurance agencies) that get updated with new entries e.g. a new “FooGroup Inc.” with id 180. Our old protobuf does not have that entry, so it can’t map 180 to the symbol “FooGroup Inc.”, and therefore bugs out: our “hey, we can’t translate this symbol to a display string” exception handler logs “can’t translate symbol 180” instead of “can’t translate symbol FooGroup Inc.”. We need to know its symbol, instead of just an integer, so we know what entry to add to our translation dictionary.
|
# ? Sep 13, 2018 15:35 |
|
Pollyanna posted:We have some enums (a list of insurance agencies) that get updated with new entries e.g. a new “FooGroup Inc.” with id 180. Our old protobuf does not have that entry, so it can’t map 180 to the symbol “FooGroup Inc.”, and therefore bugs out because our “hey we can’t translate this symbol to a display string” exception handler logs “can’t translate symbol 180” instead of “can’t translate symbol FooGroup Inc.”. We need to know it’s symbol instead of just an integer so we know what entry we need to add to our translation dictionary. Unless something has changed, a non-mapped enum in a protobuf should default to the first position. I think proto3 enforces a constraint that the first value be zero; either way, the normal convention is to make that first value an unknown entry, and your application should be able to handle it.
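To make the "handle it" part concrete, here's a minimal sketch in plain Go - `Agency` and the name map are hypothetical stand-ins for whatever protoc would generate, not the real generated code:

```go
package main

import "fmt"

// Agency stands in for a proto-generated enum type (hypothetical names).
type Agency int32

const (
	Agency_AGENCY_UNKNOWN Agency = 0   // proto3 convention: zero value = unknown
	Agency_FOO_GROUP      Agency = 180
)

var agencyName = map[Agency]string{
	Agency_AGENCY_UNKNOWN: "AGENCY_UNKNOWN",
	Agency_FOO_GROUP:      "FooGroup Inc.",
}

// displayName degrades gracefully when the wire value is newer than this build.
func displayName(a Agency) string {
	if name, ok := agencyName[a]; ok {
		return name
	}
	// Unknown value: log/display the raw number instead of blowing up.
	return fmt.Sprintf("unrecognized agency #%d", a)
}

func main() {
	fmt.Println(displayName(Agency_FOO_GROUP)) // known in this build
	fmt.Println(displayName(999))              // added server-side after we compiled
}
```

The point is that an out-of-date client can't ever know the new symbol, so the only robust move is to make the "I don't know this value" path a first-class case.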
|
# ? Sep 13, 2018 16:09 |
|
Pollyanna posted:We have some enums (a list of insurance agencies) that get updated with new entries e.g. a new “FooGroup Inc.” with id 180. Our old protobuf does not have that entry, so it can’t map 180 to the symbol “FooGroup Inc.”, and therefore bugs out because our “hey we can’t translate this symbol to a display string” exception handler logs “can’t translate symbol 180” instead of “can’t translate symbol FooGroup Inc.”. We need to know it’s symbol instead of just an integer so we know what entry we need to add to our translation dictionary. Generally speaking, if you're adding enum values, you should be able to sensibly handle the case where a client sees an enum value that it doesn't know about. Even if you perfectly solved your current problem, you'd still have a fundamentally similar issue when the client just happens to be out-of-date. For example, one option might be to lose the enums, and instead just use an opaque key (or perhaps just an integer key) to identify insurance agencies. Then, have your server expose an rpc that returns the information for the insurance agency with the given key. That way users don't need an entirely new client build every time you add an insurance agency, aren't working with outdated information if agency info changes, etc. -- Your exact problem also seems like more of a process issue. Wouldn't you update your translation dictionary when you add a new insurance agency (i.e. at the same time you add it to the enum)? As opposed to waiting until the first time someone actually uses that particular value, and then has to put up with untranslated strings until you've finished fixing it.
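A rough sketch of the opaque-key idea - `AgencyDirectory`, `AgencyInfo`, and the fake backing map are all made-up names, and a real client would put a gRPC or HTTP call behind the interface instead of a map:

```go
package main

import "fmt"

// AgencyInfo is what the (hypothetical) lookup rpc would return.
type AgencyInfo struct {
	Key         string
	DisplayName string
}

// AgencyDirectory abstracts the server call; the data lives behind the
// rpc rather than being compiled into the client binary.
type AgencyDirectory interface {
	Lookup(key string) (AgencyInfo, error)
}

// fakeDirectory is an in-memory stand-in for the server.
type fakeDirectory map[string]AgencyInfo

func (d fakeDirectory) Lookup(key string) (AgencyInfo, error) {
	info, ok := d[key]
	if !ok {
		return AgencyInfo{}, fmt.Errorf("unknown agency key %q", key)
	}
	return info, nil
}

func main() {
	dir := fakeDirectory{
		"foogroup": {Key: "foogroup", DisplayName: "FooGroup Inc."},
	}
	// An agency added server-side after this client shipped still resolves,
	// because the client only ever ships keys around.
	if info, err := dir.Lookup("foogroup"); err == nil {
		fmt.Println(info.DisplayName)
	}
}
```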
|
# ? Sep 13, 2018 16:45 |
|
Presumably when using proto, your server and client should be running from the same generated proto definition. Any other combination requires handling a bunch of edge cases. That's not particularly a go problem as much as it is a property of protos.
|
# ? Sep 14, 2018 01:42 |
|
Can you put the definition into a git submodule?
|
# ? Sep 14, 2018 02:30 |
|
The issue is from using .proto as a database when it should be defining the structure of your data. An enum's first value should be a sane default/unknown entry. Insurance companies and translations should be defined in a datasource, not hard-coded into the protocol. Imagine if DNS required a client update every time someone registered a new domain.
|
# ? Sep 14, 2018 02:48 |
|
Coffee Mugshot posted:Presumably when using proto, your server and client should be running from the same generated proto definition. Any other combination requires handling a bunch of edge cases. That's not particularly a go problem as much as it is a property of protos. That's not really a valid assumption to make. To do so you either need to completely control the build and deployment pipeline of everything using the proto definition, or delay enabling any features using the new definitions until everything is updated. The former has issues at scale; the latter runs into issues with company size, product ownership, and external partners. My experience is you're better off starting under the assumption that versions may not be the same and enforcing backwards compatibility and handling of the edge cases, i.e. an unknown entry for enums in this case.
|
# ? Sep 14, 2018 06:57 |
|
The whole point of using schemas is that you can enforce schema migration rules to ensure either backwards or forwards compatibility (or both) and have a mixed ecosystem where producers of data are updated at different rates and use different schema versions and still work correctly. Enums can often break this very quickly: while you usually can't change the data type of a field as part of a migration, enums end up being an exception, despite the fact that adding a new value to an enum is really altering the data structure. Enums suck and shouldn't be used in a schema-first architecture, in my opinion, except for the rare case of actual static enum structures that do not need to grow or shrink over time.
|
# ? Sep 14, 2018 12:26 |
|
asur posted:That's not really a valid assumption to make. To do so you either need to completely control the build and deployment pipeline of everything using the proto definition or delay enabling any features using the new definitions until everything is updated. The former has issues at scale, the latter runs into issues with company size, product ownership, and external partners. My experience is you're better off starting under the assumption that versions may not be the same and enforcing backwards compatiblity and handling of the edge cases, i.e. an unknown entry for enums in this case. My experience is that you freeze protos and delay enabling any features with new definitions to existing servers. I do really think that feature updates, even small ones, almost necessarily mean you need to rebuild everything. If you want to add new features, add a new server and write a new client that speaks to both to perform magic spells for you. CPU is cheaper than rollbacks imo. I'm arguing against defensive programming of unknown entries, it's awkward code to solve a process problem.
|
# ? Sep 14, 2018 16:01 |
|
For what it's worth, it's not actually that hard to build backwards-compatible apis, all you need to do is put in a tiny little bit of thought at the right time. I have no earthly idea why you'd go to all the effort to create an entirely different server for every little feature you want to ship, commit to maintaining parallel API surfaces for some amount of time, and then eventually say "gently caress you, update or your app stops working" when you decide it's finally time to turn down something. That's more work than just building your feature in a backwards-compatible way, in order to create a worse user experience!
|
# ? Sep 14, 2018 16:20 |
|
rt4 posted:iirc the dep guy is a reject from the Drupal world which is funny because Drupal ignored the one thing PHP got right, dependency management with Composer. Now Go has a Composer-like solution instead of whatever the hell dep was trying to do. you aren’t recalling correctly
|
# ? Sep 15, 2018 19:24 |
|
Is there a way to easily parse a JSON structure in Go? Let's say I've got:code:
The options I see are:
- a nested struct that mirrors what the json is, so I have to define a custom struct for any json I want to traverse, which I guess becomes tedious quite quickly
- define the given object as interface{} and cast it at the point of retrieval - but then traversing the json will look like code:
I'm beginning with Go, and I can't find a simple, generic way to achieve what I'm after, with people advocating approach no. 1.
|
# ? Nov 5, 2018 17:43 |
|
Check out https://github.com/mitchellh/mapstructure You still have to traipse your way through an unknown structure, but mapstructure makes it a bit nicer to do.
|
# ? Nov 5, 2018 17:54 |
|
canis minor posted:Is there a way to easily parse a JSON structure in Go? Let's say I've got: As a strong advocate of schema-first design, I would argue for approach 1. Assuming you use an actual schema like Avro or Protobuf, you can easily leverage code generation to turn all that into Go struct types for you.
|
# ? Nov 5, 2018 18:02 |
|
Most of the time I find the best thing to do with Go is just write the code. No clever tricks or shortcuts. Just write out the structs for your JSON and then you'll have first class objects in go to work with. That said, https://mholt.github.io/json-to-go/ will generate go structs from a json sample.
|
# ? Nov 5, 2018 18:25 |
|
bonds0097 posted:As a strong advocate of schema-first design, I would argue for approach 1. Assuming you use an actual schema like Avro or Protobuf, you can easily leverage code generation to turn all that into Go struct types for you. I need to mention that I'm digesting an external API, so schema will differ, depending on the parameters. So, no, I don't think so - at least I've not seen any json schema files I could use. Mao Zedong Thot posted:Check out https://github.com/mitchellh/mapstructure Thank you, but I'm confused as I'll still have to define the struct per https://godoc.org/github.com/mitchellh/mapstructure#Decode - I don't think I can do "decode this json to a nested structure that's defined solely within this json". DARPA posted:Most of the time I find the best thing to do with Go is just write the code. No clever tricks or shortcuts. Just write out the structs for your JSON and then you'll have first class objects in go to work with. Cool! Thank you very much - this makes life much easier. canis minor fucked around with this message at 18:39 on Nov 5, 2018 |
# ? Nov 5, 2018 18:31 |
|
canis minor posted:I need to mention that I'm digesting an external API, so schema will differ, depending on the parameters. So, no, I don't think so - at least I've not seen any json schema files I could use. With encoding/json you'd handle optional parameters by making them a pointer in the struct (and using the 'omitempty' hint for marshaling), so you get a nil for an omitted value.
|
# ? Nov 6, 2018 15:48 |
|
ragzilla posted:With encoding/json you'd handle optional parameters by making them a pointer in the struct and using the 'omitempty' hint so you get a nil for an omitted value. Yes, I can handle it in different ways - define one big struct to cater for all cases, or define different structs for different scenarios; I just wish I didn't need to define the structs at all, but could still traverse the given object. I can understand why it's not possible, but given that I'm a beginner in Go, it was worth asking.
|
# ? Nov 7, 2018 00:31 |
|
One of the biggest pluses of having to define the structure is that you get a minimal form of automated input validation, which turns out to be more valuable in the long run. It's a big jump from working in languages that just allow any input and deserialize it without asking questions, though. I still found myself mimicking the Go behavior in other languages afterwards.
|
# ? Nov 7, 2018 09:21 |
|
The C devs at work refuse to use a lib for JSON handling; the Go way is a revolution in comparison. Saves my rear end in arguments and production on the regular.
|
# ? Nov 7, 2018 13:47 |
If there's one thing I'd be more bored with than my current work assignments, it's rolling my own JSON parser/struct mapper in C, god drat.
|
|
# ? Nov 7, 2018 23:25 |
|
Yesterday I tried using something that should be simple - writing a struct into CSV using the https://github.com/gocarina/gocsv library - and from the start found 3 errors that prevented me from using it at all, so yay, now I'll just be using standard CSV handling and making that work. edit: and today I've encountered an issue where a double value retrieved from mongodb using mgo is returned as 0 if I define the value as float64, but as the correct value when I define it as a string. edit edit: and that's because of nonuniform data in mongodb - wheeee canis minor fucked around with this message at 10:57 on Nov 8, 2018
# ? Nov 7, 2018 23:54 |
|
canis minor posted:I'm beginning with Go, and I can't find a simple, generic way to achieve what I'm after, with people advocating approach no 1 For when you can't be arsed: https://github.com/jmoiron/jsonq
|
# ? Nov 10, 2018 13:45 |
I'm setting up a GraphQL API in Go, using reflection to generate the schema when the server starts. However, I'm not sure of the best way to specify descriptions for methods in the schema, as functions do not allow for tags. For fields and types I can specify a description tag.Go code:
Joda fucked around with this message at 00:03 on Dec 2, 2018 |
|
# ? Dec 1, 2018 22:56 |
|
I am trying to unit test something like this:quote:func Parser(row string) func(foo) bar { A parser that returns a function, and I would have liked to test the pointer to the function. However I found that function equality has been forbidden since 2011: quote:Map and function value comparisons are now disallowed (except for comparison with nil) Save for testing that the returned function actually does what I expect (which could work, since this is only for Advent of Code, and I could just recycle the tests I already wrote for the functions themselves), I wonder what other strategies there are for testing a function like that in golang: to see that a string parses correctly to the function I need. (I think in some versions of C you could just compare the pointers, but I'm not sure about that; in Java I could just implement an enum which somehow calls the function I need and test for the enum key...)
|
# ? Dec 13, 2018 11:09 |
|
Convert it to a number with unsafe.Pointer, or stringise it with fmt.Sprint and compare the strings?
|
# ? Dec 14, 2018 06:00 |
|
I have a dumb question, but why would you care about the specific pointer to the function? You have tests. Go has idioms for testing you can use to test your stuff. So if those tests are good and they passed, what does testing the pointer get you? I also have an unrelated super smarty genius question. I want to generate pretty docs for my Go libraries like the standard library has. When I Google, I find lots of stuff about running an HTTP server with godoc, and then other stuff that's like "give us the path to the GitHub repo and we'll generate docs on the fly". The libraries in question aren't publicly available on GitHub. They're internal to my workplace, and they will never in a million years be publicly available. Is there something I can do other than launching the godoc HTTP server and spidering with wget?
|
# ? Dec 14, 2018 12:20 |
|
ErIog posted:I have a dumb question, but why would you care about the specific pointer to the function? You have tests. Go has idioms for testing you can use to test your stuff. So if those tests are good and they passed, what does testing the pointer get you? Use the -html / -url option and redirect to a file? pre:
➜ ~ godoc -h
usage: godoc package [name ...]
       godoc -http=:6060
...
  -html
        print HTML in command-line mode
...
  -url string
        print HTML for named URL
JehovahsWetness fucked around with this message at 16:04 on Dec 14, 2018
# ? Dec 14, 2018 16:01 |
|
code:
|
# ? Dec 14, 2018 17:07 |
|
limaCAT posted:A parser that returns a function and I would have liked to test the pointer to the function. Go doesn't want you to be able to verify that one function is equal to another because the pointers may not be equal, and the functions' compiled code may not be equal, even if both functions are identical. Future versions of Go may optimize things in such a way that, even though both functions are the exact same function, the runtime takes advantage of some future CPU/threading/memory optimization and clones your function, leaving two pointers to two different code chunks - both the exact same code, but stored differently - which would fail your check. Think of this weird optimization trick they could do: you have two methods that return a static function containing an expensive if/else branch. A future compiler or runtime could determine that in one situation it should return a version of the function optimized for one branch, while in another situation it returns the opposite optimization. Or they could do this with normal methods.
|
# ? Dec 14, 2018 17:18 |
|
ErIog posted:I have a dumb question, but why would you care about the specific pointer to the function? You have tests. Go has idioms for testing you can use to test your stuff. So if those tests are good and they passed, what does testing the pointer get you? You could also just run the godoc server internally
|
# ? Dec 14, 2018 17:58 |
|
ErIog posted:I have a dumb question, but why would you care about the specific pointer to the function? You have tests. Go has idioms for testing you can use to test your stuff. So if those tests are good and they passed, what does testing the pointer get you? I am testing that the parser is giving out the correct result, which in this case is a pointer to a function. I agree on both counts that I don't strictly need it and that I can design around it (see parser()()), but I would like to know if any goon already tackled the problem.
|
# ? Dec 14, 2018 20:51 |
|
Hi, I'm spectacularly wasting my employer's money by overengineering something and playing around with []interface{} for the first time. The goal is to have the code accept an arbitrary number of columns from a SQL query. The sql package has a function Rows.Scan() that takes ...interface{} as an argument, which it then fills up with whatever is in that row. And now I'm confused af. numCols = the number of columns in the query, calculated at runtime. code:
code:
Why? (likely something, something pointers) Thanks! :edit messed up indentation InAndOutBrennan fucked around with this message at 10:27 on Jan 30, 2019 |
# ? Jan 30, 2019 10:25 |
|
Ok, interface{} is magic. code:
fmt.Println(iface) gives: [<nil> <nil> <nil> <nil>] fmt.Println(reflect.TypeOf(iface)) gives: []interface {} - which is expected fmt.Println(reflect.TypeOf(iface[0 - 3])) gives - <nil> <nil> <nil> <nil> Why doesn't reflect.TypeOf(iface[0]) give *interface{} (or interface{} for that matter)? After the first loop of code:
[0xc4200c1d00 0xc4200c1d10 0xc4200c1d20 0xc4200c1d30] - makes sense, we've set something to an address []interface {} - iface is still the same type, good. *interface {} *interface {} *interface {} *interface {} - and now we have pointers to interface{} in the elements, awesome. Can't remember initializing them though. Then we do the Scan(iface...) and get: [1 1 1 1] - Mah values! []interface {} - Same type for the slice, good. int64 int64 int64 int64 - Wut?? Magic. Also the memory addresses are the same all through, they never change. But their type does. Help!
|
# ? Jan 30, 2019 12:11 |
|
|
The for loop is setting each element of iface to be the address of itself. At this point iface is a slice of interface values, each containing a pointer to its own element. row.Scan() then gets one row from the database and stores each value in *iface[i], which overwrites the pointer and leaves you with a slice of values (each one stored in an interface value). The next time round the loop you need to reset all of these pointers, otherwise Scan doesn't know where to put the result.
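Since spinning up a real database here isn't practical, the mechanism can be sketched with a fake Scan - `fakeScan` is a stand-in for sql.Rows.Scan (the real one also accepts typed pointers like *int64). Keeping the values and the pointers in two separate slices sidesteps the "element overwrites its own pointer" confusion entirely:

```go
package main

import "fmt"

// fakeScan mimics the shape of sql.Rows.Scan: each dest must be a pointer,
// and the column value is stored through it.
func fakeScan(row []interface{}, dest ...interface{}) error {
	for i, d := range dest {
		p, ok := d.(*interface{})
		if !ok {
			return fmt.Errorf("dest %d is not a *interface{}", i)
		}
		*p = row[i]
	}
	return nil
}

func main() {
	numCols := 3 // in real code: len(rows.Columns())
	vals := make([]interface{}, numCols)
	ptrs := make([]interface{}, numCols)

	// ptrs always holds pointers, vals always holds the scanned values,
	// so nothing overwrites itself and the setup survives multiple rows.
	for i := range vals {
		ptrs[i] = &vals[i]
	}

	row := []interface{}{int64(1), "hello", 3.14}
	if err := fakeScan(row, ptrs...); err != nil {
		panic(err)
	}
	fmt.Println(vals) // [1 hello 3.14]
}
```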
|
# ? Jan 30, 2019 12:58 |