Riven
Apr 22, 2002

minato posted:

I think you have the names mixed up. Russ Meyer was all about tits. Russ Cox is an rear end.

Lol yep thank you.


waffle enthusiast
Nov 16, 2007



e: very wrong thread.

Pollyanna
Mar 5, 2005

Milk's on them.


How are you expected to keep protobuf definitions and compiled files up to date? Just constantly ping a GitHub repo until the file changes or something? That sounds dangerous.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Pollyanna posted:

How are you expected to keep protobuf definitions and compiled files up to date? Just constantly ping a GitHub repo until the file changes or something? That sounds dangerous.

That's the best part about protobufs - if you do them right, you don't have to. As long as your protobuf only changes in particular ways, servers and clients can keep using the definitions they were originally compiled with and continue to work correctly, even when the thing they're talking to is using updated definitions.

Can you give an example of a situation that you're concerned about?

2nd Rate Poster
Mar 25, 2004

i started a joke

Pollyanna posted:

How are you expected to keep protobuf definitions and compiled files up to date? Just constantly ping a GitHub repo until the file changes or something? That sounds dangerous.

On the tech side -- have your CI compile the proto definitions into a new package that gets pushed on every "release" to GitHub or wherever. You then bring the compiled proto package into your app like any other dependency, managing versions to get new features like any other dependency.

On the people side -- you don't break your API contracts by making backwards-incompatible changes. If your message definitions are changing quickly enough that apps need new fields right away, that type of change is probably a coordinated effort to do stuff with the new fields, so bringing in updated proto definitions is probably the least of your worries.

Pollyanna
Mar 5, 2005

Milk's on them.


Jabor posted:

That's the best part about protobufs - if you do them right, you don't have to. As long as your protobuf only changes in particular ways, servers and clients can keep using the definitions they were originally compiled with and continue to work correctly, even when the thing they're talking to is using updated definitions.

Can you give an example of a situation that you're concerned about?

We have some enums (a list of insurance agencies) that get updated with new entries, e.g. a new “FooGroup Inc.” with id 180. Our old protobuf does not have that entry, so it can’t map 180 to the symbol “FooGroup Inc.”, and therefore bugs out: our “hey, we can’t translate this symbol to a display string” exception handler logs “can’t translate symbol 180” instead of “can’t translate symbol FooGroup Inc.”. We need to know its symbol instead of just an integer so we know what entry to add to our translation dictionary.

asur
Dec 28, 2012

Pollyanna posted:

We have some enums (a list of insurance agencies) that get updated with new entries, e.g. a new “FooGroup Inc.” with id 180. Our old protobuf does not have that entry, so it can’t map 180 to the symbol “FooGroup Inc.”, and therefore bugs out: our “hey, we can’t translate this symbol to a display string” exception handler logs “can’t translate symbol 180” instead of “can’t translate symbol FooGroup Inc.”. We need to know its symbol instead of just an integer so we know what entry to add to our translation dictionary.

Unless something has changed, a non-mapped enum value in a protobuf should default to the first position. I think proto3 enforces the constraint that the first value map to zero; either way, the normal convention is to make that value the unknown entry, and your application should be able to handle it.
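For illustration, here's a minimal Go sketch of that convention. The Agency enum, its values, and the name table are all made up, but the shape mirrors what protoc-gen-go emits for a proto3 enum whose zero value is the unknown entry:

```go
package main

import "fmt"

// Agency is a hypothetical enum type, shaped like generated proto3 code.
type Agency int32

const (
	AgencyUnknown Agency = 0 // proto3 convention: zero value means "unknown"
	AgencyAcme    Agency = 1
	AgencyGlobex  Agency = 2
)

// agencyNames is this client's (possibly stale) translation dictionary.
var agencyNames = map[Agency]string{
	AgencyAcme:   "Acme Insurance",
	AgencyGlobex: "Globex Mutual",
}

// displayName degrades gracefully when the wire value is newer than
// the definitions this client was compiled with.
func displayName(a Agency) string {
	if name, ok := agencyNames[a]; ok {
		return name
	}
	return fmt.Sprintf("unrecognized agency (#%d)", int32(a))
}

func main() {
	fmt.Println(displayName(AgencyAcme))
	fmt.Println(displayName(Agency(180))) // value added after this client shipped
}
```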

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Pollyanna posted:

We have some enums (a list of insurance agencies) that get updated with new entries, e.g. a new “FooGroup Inc.” with id 180. Our old protobuf does not have that entry, so it can’t map 180 to the symbol “FooGroup Inc.”, and therefore bugs out: our “hey, we can’t translate this symbol to a display string” exception handler logs “can’t translate symbol 180” instead of “can’t translate symbol FooGroup Inc.”. We need to know its symbol instead of just an integer so we know what entry to add to our translation dictionary.

Generally speaking, if you're adding enum values, you should be able to sensibly handle the case where a client sees an enum value that it doesn't know about. Even if you perfectly solved your current problem, you'd still have a fundamentally similar issue when the client just happens to be out-of-date.

For example, one option might be to lose the enums, and instead just use an opaque key (or perhaps just an integer key) to identify insurance agencies. Then, have your server expose an rpc that returns the information for the insurance agency with the given key. That way users don't need an entirely new client build every time you add an insurance agency, aren't working with outdated information if agency info changes, etc.

--

Your exact problem also seems like more of a process issue. Wouldn't you update your translation dictionary when you add a new insurance agency (i.e. at the same time you add it to the enum)? As opposed to waiting until the first time someone actually uses that particular value, and then having to put up with untranslated strings until you've finished fixing it.
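A rough Go sketch of that lookup-by-key idea. Everything here (AgencyInfo, AgencyResolver, GetAgency) is a hypothetical name standing in for a generated gRPC stub; the point is just the shape:

```go
package main

import "fmt"

// AgencyInfo is what the server would return for a key it knows about.
type AgencyInfo struct {
	Key  string
	Name string
}

// AgencyResolver stands in for a generated RPC client stub.
type AgencyResolver interface {
	GetAgency(key string) (*AgencyInfo, error)
}

// fakeResolver simulates the server-side source of truth, which can
// grow new agencies without any client rebuild.
type fakeResolver struct{ data map[string]string }

func (f fakeResolver) GetAgency(key string) (*AgencyInfo, error) {
	name, ok := f.data[key]
	if !ok {
		return nil, fmt.Errorf("no agency with key %q", key)
	}
	return &AgencyInfo{Key: key, Name: name}, nil
}

func main() {
	r := fakeResolver{data: map[string]string{"180": "FooGroup Inc."}}
	info, err := r.GetAgency("180")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(info.Name)
}
```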

Coffee Mugshot
Jun 26, 2010

by Lowtax
Presumably when using proto, your server and client should be running from the same generated proto definition. Any other combination requires handling a bunch of edge cases. That's not particularly a Go problem as much as it is a property of protos.

spiritual bypass
Feb 19, 2008

Grimey Drawer
Can you put the definition into a git submodule?

DARPA
Apr 24, 2005
We know what happens to people who stay in the middle of the road. They get run over.
The issue comes from using .proto as a database when it should only be defining the structure of your data. An enum's first value should be a sane default/unknown entry.

Insurance companies and translations should be defined in a datasource, not hard coded into the protocol.

Imagine DNS requiring a client update every time someone registered a new domain.

asur
Dec 28, 2012

Coffee Mugshot posted:

Presumably when using proto, your server and client should be running from the same generated proto definition. Any other combination requires handling a bunch of edge cases. That's not particularly a go problem as much as it is a property of protos.

That's not really a valid assumption to make. To do so you either need to completely control the build and deployment pipeline of everything using the proto definition, or delay enabling any features using the new definitions until everything is updated. The former has issues at scale; the latter runs into issues with company size, product ownership, and external partners. My experience is you're better off starting under the assumption that versions may not be the same, and enforcing backwards compatibility and handling of the edge cases, i.e. an unknown entry for enums in this case.

bonds0097
Oct 23, 2010

I would cry but I don't think I can spare the moisture.
Pillbug
The whole point of using schemas is that you can enforce schema migration rules to ensure either backwards or forwards compatibility (or both) and have a mixed ecosystem where producers and consumers of data are updated at different rates and use different schema versions, and still work correctly.

Enums can often break this very quickly: while you usually can't change the data type of a field as part of a migration, enums end up being an exception, even though adding a new value to an enum is really altering the data structure. Enums suck and shouldn't be used in a schema-first architecture, in my opinion, except for the rare case of truly static enum structures that do not need to grow or shrink over time.

Coffee Mugshot
Jun 26, 2010

by Lowtax

asur posted:

That's not really a valid assumption to make. To do so you either need to completely control the build and deployment pipeline of everything using the proto definition, or delay enabling any features using the new definitions until everything is updated. The former has issues at scale; the latter runs into issues with company size, product ownership, and external partners. My experience is you're better off starting under the assumption that versions may not be the same, and enforcing backwards compatibility and handling of the edge cases, i.e. an unknown entry for enums in this case.

My experience is that you freeze protos and delay enabling any features with new definitions on existing servers. I really do think that feature updates, even small ones, almost necessarily mean you need to rebuild everything. If you want to add new features, add a new server and write a new client that speaks to both to perform magic spells for you. CPU is cheaper than rollbacks imo. I'm arguing against defensive programming for unknown entries; it's awkward code to solve a process problem.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem
For what it's worth, it's not actually that hard to build backwards-compatible APIs; all you need to do is put in a tiny little bit of thought at the right time.

I have no earthly idea why you'd go to all the effort of creating an entirely different server for every little feature you want to ship, commit to maintaining parallel API surfaces for some amount of time, and then eventually say "gently caress you, update or your app stops working" when you decide it's finally time to turn something down. That's more work than just building your feature in a backwards-compatible way, in order to create a worse user experience!

Literally Elvis
Oct 21, 2013

rt4 posted:

iirc the dep guy is a reject from the Drupal world, which is funny because Drupal ignored the one thing PHP got right: dependency management with Composer. Now Go has a Composer-like solution instead of whatever the hell dep was trying to do.

you aren’t recalling correctly

canis minor
May 4, 2011

Is there a way to easily parse a JSON structure in Go? Let's say I've got:

code:
{
    "foo": [{
        "bar": [{
            "qux": ""
        }, {
            "quz": ""
        }]
    }, {
        "baz": [{
            "bax": ""
        }, {
            "qur": ""
        }]
    }]
}
Let's assume I want to access foo[0].bar[0].qux - the choices I have are:
- a nested struct that mirrors what the JSON is, so I have to define a custom struct for any JSON I want to traverse, which I guess becomes tedious quite quickly
- defining the given object as interface{} and casting it at the point of retrieval - but then traversing the JSON will look like

code:
var jsonResponse interface{}
err = json.Unmarshal(buffer, &jsonResponse)

if err != nil {
	// handle err
}

fmt.Println(jsonResponse.(map[string]interface{})["foo"].([]interface{})[0].(map[string]interface{})["bar"])...
Which is horrendous.

I'm beginning with Go, and I can't find a simple, generic way to achieve what I'm after, with people advocating approach no. 1

Mao Zedong Thot
Oct 16, 2008


Check out https://github.com/mitchellh/mapstructure

You still have to traipse your way through an unknown structure, but mapstructure makes it a bit nicer to do.
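If pulling in a dependency feels like overkill, a stdlib-only version of roughly what mapstructure does is to round-trip a map through encoding/json back into a typed struct. Slower than mapstructure, but dependency-free (Bar and decodeInto are made-up names for the sketch):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Bar is the typed shape we want to end up with.
type Bar struct {
	Qux string `json:"qux"`
}

// decodeInto re-marshals a generic map and unmarshals it into out,
// mimicking what mapstructure.Decode does without the dependency.
func decodeInto(m map[string]interface{}, out interface{}) error {
	b, err := json.Marshal(m)
	if err != nil {
		return err
	}
	return json.Unmarshal(b, out)
}

func main() {
	m := map[string]interface{}{"qux": "hello"}
	var bar Bar
	if err := decodeInto(m, &bar); err != nil {
		panic(err)
	}
	fmt.Println(bar.Qux) // hello
}
```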

bonds0097
Oct 23, 2010

I would cry but I don't think I can spare the moisture.
Pillbug

canis minor posted:

Is there a way to easily parse a JSON structure in Go? Let's say I've got:

code:
{
    "foo": [{
        "bar": [{
            "qux": ""
        }, {
            "quz": ""
        }]
    }, {
        "baz": [{
            "bax": ""
        }, {
            "qur": ""
        }]
    }]
}
Let's assume I want to access foo[0].bar[0].qux - the choices I have are:
- a nested struct that mirrors what the JSON is, so I have to define a custom struct for any JSON I want to traverse, which I guess becomes tedious quite quickly
- defining the given object as interface{} and casting it at the point of retrieval - but then traversing the JSON will look like

code:
var jsonResponse interface{}
err = json.Unmarshal(buffer, &jsonResponse)

if err != nil {
	// handle err
}

fmt.Println(jsonResponse.(map[string]interface{})["foo"].([]interface{})[0].(map[string]interface{})["bar"])...
Which is horrendous.

I'm beginning with Go, and I can't find a simple, generic way to achieve what I'm after, with people advocating approach no. 1

As a strong advocate of schema-first design, I would argue for approach 1. Assuming you use an actual schema like Avro or Protobuf, you can easily leverage code generation to turn all that into Go struct types for you.

DARPA
Apr 24, 2005
We know what happens to people who stay in the middle of the road. They get run over.
Most of the time I find the best thing to do with Go is to just write the code. No clever tricks or shortcuts. Just write out the structs for your JSON and then you'll have first-class objects in Go to work with.

That said, https://mholt.github.io/json-to-go/ will generate go structs from a json sample.

canis minor
May 4, 2011

bonds0097 posted:

As a strong advocate of schema-first design, I would argue for approach 1. Assuming you use an actual schema like Avro or Protobuf, you can easily leverage code generation to turn all that into Go struct types for you.

I need to mention that I'm consuming an external API, so the schema will differ depending on the parameters. So, no, I don't think so - at least I've not seen any JSON schema files I could use.

Mao Zedong Thot posted:

Check out https://github.com/mitchellh/mapstructure

You still have to traipse your way through an unknown structure, but mapstructure makes it a bit nicer to do.

Thank you, but I'm confused, as I'll still have to define the struct per https://godoc.org/github.com/mitchellh/mapstructure#Decode - I don't think I can do "decode this JSON to a nested structure that's defined solely within this JSON".

DARPA posted:

Most of the time I find the best thing to do with Go is just write the code. No clever tricks or shortcuts. Just write out the structs for your JSON and then you'll have first class objects in go to work with.

That said, https://mholt.github.io/json-to-go/ will generate go structs from a json sample.

Cool! Thank you very much - this makes life much easier.

canis minor fucked around with this message at 18:39 on Nov 5, 2018

ragzilla
Sep 9, 2005
don't ask me, i only work here


canis minor posted:

I need to mention that I'm consuming an external API, so the schema will differ depending on the parameters. So, no, I don't think so - at least I've not seen any JSON schema files I could use.

With encoding/json you'd handle optional parameters by making them pointers in the struct, so you get a nil for an omitted value (the 'omitempty' hint matters when encoding back out).

canis minor
May 4, 2011

ragzilla posted:

With encoding/json you'd handle optional parameters by making them pointers in the struct, so you get a nil for an omitted value (the 'omitempty' hint matters when encoding back out).

Yes, I can handle it in different ways - define one big struct to cater for all cases, or define different structs for different scenarios; I just wish I didn't need to define the structs at all but could still traverse the given object.

I can understand why it's not possible, but given that I'm a beginner in Go it was worth asking :)

geonetix
Mar 6, 2011


One of the biggest pluses of having to define the structure is that you get some minimal form of automated validation of input, which turns out to be more valuable in the long run. It's a big jump from languages that just accept any input and deserialize it without asking questions, though.

Still found myself mimicking the Go behavior in other languages afterwards.

Startyde
Apr 19, 2007

come post with us, forever and ever and ever
The C devs at work refuse to use a lib for JSON handling; the Go way is a revolution in comparison. Saves my rear end in arguments and production on the regular.

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
If there's one thing I'd be more bored with than my current work assignments, it's rolling my own JSON parser/struct mapper in C, god drat.

canis minor
May 4, 2011

Yesterday I tried doing something that should be simple - writing a struct into CSV using the https://github.com/gocarina/gocsv library - and from the start hit 3 errors that prevented me from using it at all, so yay, now I'll just be using standard CSV handling and making that work.

edit: and today I've encountered an issue where a double value retrieved from MongoDB using mgo is returned as 0 if I define the field as float64, but as the correct value when I define it as string.

edit edit: and that's because of nonuniform data in MongoDB - wheeee

canis minor fucked around with this message at 10:57 on Nov 8, 2018

homercles
Feb 14, 2010

canis minor posted:

I'm beginning with Go, and I can't find a simple, generic way to achieve what I'm after, with people advocating approach no. 1

For when you can't be arsed: https://github.com/jmoiron/jsonq

Joda
Apr 24, 2010

When I'm off, I just like to really let go and have fun, y'know?

Fun Shoe
I'm setting up a GraphQL API in Go using reflection to generate the schema when the server starts. However, I'm not sure of the best way to specify descriptions for methods in the schema, as functions do not allow for tags. For fields and types I can specify a description tag.

Go code:
package jas_schema

type TaskSchema struct {
	types     TaskTypes
	queries   TaskQueries
	mutations TaskMutations
}

type TaskTypes struct {
	Task Task `description:"A task type representing a work task"`
}

//Queries for the Task schema
type TaskQueries struct {
}

//Mutations for the task schema
type TaskMutations struct {
}

//Task type representing a work task
type Task struct {
	Id   int64 `description:"Task ID"`
	Name string `description:"Task name"`
}

//Query to get a single task
func (t *TaskQueries) task(Id int64) *Task {
	return nil
}

//Mutation to create a new task
func (t *TaskMutations) createTask(task *Task) *Task {
	return nil
}

//Mutation to change a task
func (t *TaskMutations) updateTask(task *Task) *Task {
	return nil
}
I thought about adding fields to the queries object with the same names as the functions but ending in Description or something like that, but it feels real hacky. Also, I have the same problem for the arguments.

Joda fucked around with this message at 00:03 on Dec 2, 2018

limaCAT
Dec 22, 2007

il pistone e male
Slippery Tilde
I am trying to unit test something like this:

quote:

func Parser(row string) func(foo) bar {
return nil
}

A parser that returns a function; I would have liked to test the pointer to the function.
However, I found that function equality has been forbidden since 2011:

quote:

Map and function value comparisons are now disallowed (except for comparison
with nil) as per the Go 1 plan. Function equality was problematic in some
contexts and map equality compares pointers, not the maps' content.

Save for testing that the returned function actually does what I expect (which could work, since this is only for Advent of Code, and I could just recycle the tests I already wrote for the functions themselves), I wonder what other strategies there are for testing a function like that in golang: checking that a string parses correctly to the function I need.

(I think in some versions of C you could just compare the pointers, but I'm not sure about that; in Java I could just implement an enum which somehow calls the function I need and test for the enum key...)

homercles
Feb 14, 2010

Convert it to a number with unsafe.Pointer, or stringise it with fmt.Sprint and compare the strings?
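A sketch of the pointer-comparison approach via reflect, with made-up parser functions standing in for the real ones. Caveat: this checks current compiler behavior, not anything the language spec guarantees, so it's a test-only trick:

```go
package main

import (
	"fmt"
	"reflect"
)

// parseA and parseB are hypothetical stand-ins for the parsers under test.
func parseA(s string) int { return len(s) }
func parseB(s string) int { return -len(s) }

// Parser picks a parse function based on the input row.
func Parser(kind string) func(string) int {
	if kind == "a" {
		return parseA
	}
	return parseB
}

// samePointer compares the code pointers of two function values.
// Works today for references to the same top-level function.
func samePointer(f, g interface{}) bool {
	return reflect.ValueOf(f).Pointer() == reflect.ValueOf(g).Pointer()
}

func main() {
	fmt.Println(samePointer(Parser("a"), parseA))
	fmt.Println(samePointer(Parser("a"), parseB))
}
```

fmt.Sprintf("%p", f) gives the same information as a string, if you'd rather compare those.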

ErIog
Jul 11, 2001

:nsacloud:
I have a dumb question, but why would you care about the specific pointer to the function? You have tests. Go has idioms for testing you can use to test your stuff. So if those tests are good and they passed, what does testing the pointer get you?

I also have an unrelated super smarty genius question. I want to generate pretty docs for my Go libraries like the standard library has. When I Google, I find lots of stuff about running an HTTP server with godoc, and then other stuff that's like "give us the path to the GitHub repo and we'll generate docs on the fly." The libraries in question aren't publicly available on GitHub. They're internal to my workplace, and they will never in a million years be publicly available. Is there something I can do other than launching the godoc HTTP server and spidering with wget?

JehovahsWetness
Dec 9, 2005

bang that shit retarded

ErIog posted:

I have a dumb question, but why would you care about the specific pointer to the function? You have tests. Go has idioms for testing you can use to test your stuff. So if those tests are good and they passed, what does testing the pointer get you?

I also have an unrelated super smarty genius question. I want to generate pretty docs for my Go libraries like the standard library has. When I Google I find lots of stuff about running an HTTP server with godoc, and then I find other stuff that's like give us the path to the GitHub and we'll generate docs on the fly. The libraries in question aren't publicly available on Github. They're internal to my workplace, and they will never in a million years be publicly available. Is there something I can do other than launching the godoc HTTP server and spidering with wget?

Use the -html / -url option and redir to a file?
pre:
➜  ~ godoc -h
usage: godoc package [name ...]
	godoc -http=:6060
...
  -html
    	print HTML in command-line mode
...
  -url string
    	print HTML for named URL
Probably easier to just wget it, honestly.

JehovahsWetness fucked around with this message at 16:04 on Dec 14, 2018

Mine GO BOOM
Apr 18, 2002
If it isn't broken, fix it till it is.
code:
godoc -http=localhost:8080
wget -e robots=off -r -np -N -E -p -k [url]http://localhost:8080/pkg/[/url]
Just run that as a nightly cronjob and you'll have updated static documentation. Then you can rsync it to your internal website.

Mine GO BOOM
Apr 18, 2002
If it isn't broken, fix it till it is.

limaCAT posted:

A parser that returns a function and I would have liked to test the pointer to the function.

Go doesn't want you to be able to verify that one function is equal to another, in the sense that the pointers may not be equal, and the functions' optimized bytes may not be equal, even if both functions are identical. Future versions of Go might optimize in such a way that, even though both values refer to the exact same function, the runtime has cloned it to take advantage of some future CPU/threading/memory optimization, leaving two pointers to two different code chunks - both the same code, stored differently - which would fail your check.

Think of a weird optimization trick they could do: you have two methods that return a static function containing an expensive if/else branch. A future compiler or runtime could determine that in one situation it should return a version of the function optimized for one branch, and in another situation the opposite optimization. Or they could do this with normal methods.

Mao Zedong Thot
Oct 16, 2008


ErIog posted:

I have a dumb question, but why would you care about the specific pointer to the function? You have tests. Go has idioms for testing you can use to test your stuff. So if those tests are good and they passed, what does testing the pointer get you?

I also have an unrelated super smarty genius question. I want to generate pretty docs for my Go libraries like the standard library has. When I Google I find lots of stuff about running an HTTP server with godoc, and then I find other stuff that's like give us the path to the GitHub and we'll generate docs on the fly. The libraries in question aren't publicly available on Github. They're internal to my workplace, and they will never in a million years be publicly available. Is there something I can do other than launching the godoc HTTP server and spidering with wget?

You could also just run the godoc server internally :shrug:

limaCAT
Dec 22, 2007

il pistone e male
Slippery Tilde

ErIog posted:

I have a dumb question, but why would you care about the specific pointer to the function? You have tests. Go has idioms for testing you can use to test your stuff. So if those tests are good and they passed, what does testing the pointer get you?

I am testing that the parser gives the correct result, which in this case is a pointer to a function.

I agree on both counts that I don't strictly need it and that I can design around it (see parser()()), but I would like to know if any goon has already tackled the problem.

InAndOutBrennan
Dec 11, 2008
Hi,

I'm spectacularly wasting my employer's money by overengineering something and playing around with []interface{} for the first time. The goal is to have the code accept an arbitrary number of columns from a SQL query.

The sql package has a function Rows.Scan() that takes ...interface{} as an argument, which it then fills with whatever is in that row.

And now I'm confused af.

numCols = the number of columns in the query, calculated at runtime.

code:
iface := make([]interface{}, numCols)

for rows.Next() {
	for i := 0; i < numCols; i++ {iface[i] = &iface[i]} // magic!

	err = rows.Scan(iface...)
	
	// continue doing stuff
}
This works, it eats up whatever I throw at it so far and I get the results I want.

code:
iface := make([]interface{}, numCols)

for i := 0; i < numCols; i++ {iface[i] = &iface[i]} // magic moved!!

for rows.Next() {
	err = rows.Scan(iface...)

	// continue doing stuff
}
This, however, does not work. It runs, but it just keeps the values from the first row, not overwriting them with new values on each iteration.

Why? (likely something, something pointers)

Thanks!

edit: messed up indentation

InAndOutBrennan fucked around with this message at 10:27 on Jan 30, 2019

InAndOutBrennan
Dec 11, 2008
Ok,

interface{} are magic.

code:
iface := make([]interface{}, numCols)
After this,
fmt.Println(iface) gives: [<nil> <nil> <nil> <nil>]
fmt.Println(reflect.TypeOf(iface)) gives: []interface {} - which is expected
fmt.Println(reflect.TypeOf(iface[i])) for i = 0..3 gives: <nil> <nil> <nil> <nil>

Why doesn't reflect.TypeOf(iface[0]) give *interface{} (or interface{} for that matter)?

After the first loop of
code:
for i := 0; i < numCols; i++ {iface[i] = &iface[i]}
The corresponding output to the above is:
[0xc4200c1d00 0xc4200c1d10 0xc4200c1d20 0xc4200c1d30] - makes sense, we've set something to an address
[]interface {} - iface is still the same type, good.
*interface {} *interface {} *interface {} *interface {} - and now we have pointers to interface{} in the elements, awesome. Can't remember initializing them though.

Then we do the Scan(iface...) and get:
[1 1 1 1] - Mah values!
[]interface {} - Same type for the slice, good.
int64 int64 int64 int64 - Wut??

Magic. Also, the memory addresses are the same all the way through; they never change. But their types do.

Help!


robostac
Sep 23, 2009
The for loop is setting each element of iface to be the address of itself. At this point iface is a slice of interface objects, each containing a pointer to that same interface object. rows.Scan() then gets one row from the database and stores each value in *iface[i], which gets rid of the pointer and leaves you with a slice of values (each one stored in an interface object). The next time round the loop you need to reset all of these pointers; otherwise it doesn't know where to put the result.
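A dependency-free repro of that behavior, with a made-up fakeScan standing in for rows.Scan: storing through the *interface{} replaces the pointer that was sitting in the slice, so the pointers must be reset before the next "row".

```go
package main

import "fmt"

// fakeScan mimics rows.Scan: each dest is expected to be a
// *interface{}, and the column value is stored through it.
func fakeScan(dest ...interface{}) {
	for i, d := range dest {
		*(d.(*interface{})) = i + 100 // "column values" for this row
	}
}

func main() {
	vals := make([]interface{}, 2)
	for i := range vals {
		vals[i] = &vals[i] // each element points at itself
	}
	fmt.Printf("%T\n", vals[0]) // *interface {}

	fakeScan(vals...)
	fmt.Println(vals)           // [100 101]
	fmt.Printf("%T\n", vals[0]) // int: the pointer is gone

	// Without re-running vals[i] = &vals[i], a second fakeScan(vals...)
	// would panic on the type assertion, because the elements now hold ints.
}
```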
