KillHour
Oct 28, 2007


So full disclosure - I work for a NoSQL document / multi-modal database company, so I'm both biased and have a very skewed set of competencies. I really don't have much experience at all with BigQuery, but many databases can handle semi-structured data. The question is if they can index that data - if you're going to be dealing with these fields over and over, you don't want to have to extract the entire column as a string, parse the JSON into a tree and then navigate that structure for every record. That would be horrifically inefficient. You want something that can index the actual values contained within the column individually and then work off of those indexes as much as possible.

As an example, the database I work with doesn't even have columns - everything is JSON documents and the entire document is indexed, with more specialized indexes for user-defined properties to speed up certain kinds of query.


CarForumPoster
Jun 26, 2013

⚡POWER⚡

KillHour posted:

So full disclosure - I work for a NoSQL document / multi-modal database company, so I'm both biased and have a very skewed set of competencies. I really don't have much experience at all with BigQuery, but many databases can handle semi-structured data. The question is if they can index that data - if you're going to be dealing with these fields over and over, you don't want to have to extract the entire column as a string, parse the JSON into a tree and then navigate that structure for every record. That would be horrifically inefficient. You want something that can index the actual values contained within the column individually and then work off of those indexes as much as possible.

As an example, the database I work with doesn't even have columns - everything is JSON documents and the entire document is indexed, with more specialized indexes for user-defined properties to speed up certain kinds of query.

I don't have much preference for relational databases, it's just what I've used so far. I obv don't want to be storing my JSON or nested data as a string, but I'm not considering doing that. My requirements for this project are that costs stay reasonable (<$500/mo) and that my use cases can be implemented in 3-4 weeks by one engineer. So yes, they'll need to index the values, and it appears BQ does, but I also need to be able to search those indexed values in a way that gives me the insights I need, and that's the thing holding me back from looking into document DBs meaningfully more.
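For concreteness, the kind of search I mean, via the bq CLI (JSON_VALUE is BigQuery's JSON extraction function; the table and field names here are made up):

code:
bq query --use_legacy_sql=false '
  SELECT JSON_VALUE(payload, "$.customer.region") AS region, COUNT(*) AS n
  FROM mydataset.events
  GROUP BY region
  ORDER BY n DESC'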

Sirocco
Jan 27, 2009

HEY DIARY! HA HA HA!

RPATDO_LAMD posted:

You're pretty light on details here, and the exact issue is probably somewhere in the details.
Render-to-texture should work out just the same as rendering to a framebuffer. Does that single small sprite show up as the correct size etc if you render it to the main framebuffer instead of the texture?

Yeah, I was hoping someone would say "oh you probably did X". Yes, when I draw anything to the main framebuffer's texture it displays correctly. I'll try and put the salient code here in some sort of sensible order.

Initialising the two framebuffers: one for the main texture that everything gets rendered to, and the alternate one for the sorts of effects I'm trying to make.

code:
OpenGLInitialiseGlobalHandles(open_gl *OpenGL, rectangle2i DrawSpace,
                              s32 Width, s32 Height)
{
    if(!GlobalFramebufferHandle)
    {
        glGenFramebuffers(1, &GlobalFramebufferHandle);
    }

    OpenGLBindFramebuffer(DrawSpace, Width, Height, GlobalFramebufferHandle);

    if(!GlobalFramebufferTexture)
    {
        glGenTextures(1, &GlobalFramebufferTexture);
    }    
    glBindTexture(GL_TEXTURE_2D, GlobalFramebufferTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, OpenGL->DefaultInternalTextureFormat,
                 Width, Height, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                           GlobalFramebufferTexture, 0);
    glBindTexture(GL_TEXTURE_2D, 0);

    if(!GlobalAltFramebufferHandle)
    {
        glGenFramebuffers(1, &GlobalAltFramebufferHandle);
    }
    
    OpenGLBindFramebuffer(DrawSpace, Width, Height, GlobalAltFramebufferHandle);

    if(!GlobalAltTexture)
    {
        glGenTextures(1, &GlobalAltTexture);
    }
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, GlobalAltTexture);
    glTexImage2D(GL_TEXTURE_2D, 0, OpenGL->DefaultInternalTextureFormat,
                 Width, Height, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D,
                           GlobalAltTexture, 0);
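    // NB: a framebuffer's draw buffer defaults to GL_COLOR_ATTACHMENT0; since this
    // texture is attached at GL_COLOR_ATTACHMENT1, a glDrawBuffers() call selecting
    // attachment 1 may be needed before fragments will land in GlobalAltTexture.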
    glBindTexture(GL_TEXTURE_2D, 0);
    glActiveTexture(GL_TEXTURE0);
        
    GlobalsInitialised = true;
}
This code runs through all the sprite rendering requests made by the game, runs a shader on each to render it to the right place on the screen, checks some stuff encoded into the ID to see which texture the sprite should be rendered to, and then renders it.

code:
                            post_processing_effect DefaultEffect = {PPEffect_DefaultTexturedQuad};
                            OpenGLBeginProgram(OpenGL, DefaultEffect);

                            for(quad_data_chunk *Chunk = ZBucket->FirstChunk;
                                Chunk;
                                Chunk = Chunk->NextInRenderQueue)
                            {
                                opengl_quad_chunk_metadata MetaData = GetChunkMetaDataFromID(Chunk->ID);

                                if(MetaData.UsesAltTexture)
                                {
                                    glBindFramebuffer(GL_FRAMEBUFFER, GlobalAltFramebufferHandle);
                                    glActiveTexture(GL_TEXTURE1);   
                                }

                                glBindTexture(GL_TEXTURE_2D, MetaData.TextureHandle);
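                                // NB: glBindTexture targets whichever texture unit is active,
                                // so when UsesAltTexture is set, this binds the sprite's texture
                                // to unit 1, replacing whatever GlobalAltTexture binding was there.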
                                glDrawArrays(GL_QUADS, Chunk->VertexIndex, Chunk->VertexCount);
                                if(MetaData.UsesAltTexture)
                                {
                                    glBindFramebuffer(GL_FRAMEBUFFER, GlobalFramebufferHandle);
                                    glActiveTexture(GL_TEXTURE0);
                                }
                            }

                            OpenGLEndProgram(DefaultEffect);
                        } break;
The default shader code for a regular sprite:

code:
    char *StandardVertexCode = R"GLSL(

in v2 VertP;
in v2 VertUV;
in v4 VertColour;
smooth out v2 FragUV;
smooth out v4 FragColour;

void main()
{
v4 P = {VertP.x, VertP.y, 0, 1};
gl_Position = gl_ModelViewProjectionMatrix * P;
FragUV = VertUV;
FragColour = VertColour;
}
    )GLSL";

    char *StandardFragmentCode = R"GLSL(

uniform sampler2D TextureSampler;
smooth in v2 FragUV;
smooth in v4 FragColour;
smooth out v4 Colour;

void main()
{
v4 TestColour = FragColour * texture(TextureSampler, FragUV);

Colour = TestColour;
}
    )GLSL";
Shader initialisation (for this program, the OpenGLBeginProgram function just assigns the sampler to the default texture unit and OpenGLEndProgram literally does nothing):

code:
  case PPEffect_DefaultTexturedQuad:
        {
            char *VertexCode = StandardVertexCode;
            char *FragmentCode = StandardFragmentCode;

            GLProgram->ProgramID = OpenGLCreateProgram(Defines, HeaderCode, VertexCode, FragmentCode);
            GLProgram->TextureSamplerID = glGetUniformLocation(GLProgram->ProgramID, "TextureSampler");
            OpenGL->VertPID = glGetAttribLocation(GLProgram->ProgramID, "VertP");
            OpenGL->VertUVID = glGetAttribLocation(GLProgram->ProgramID, "VertUV");
            OpenGL->VertColourID = glGetAttribLocation(GLProgram->ProgramID, "VertColour");
        } break;
After everything's been rendered to the texture, this code draws it to the screen and lets me run shaders on the whole image for post-processing effects:

code:
    glBindFramebuffer(GL_READ_FRAMEBUFFER, GlobalFramebufferHandle);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, GlobalFramebufferTexture);

    if(Effect.Type)
    {
        OpenGLBeginProgram(OpenGL, Effect);
    }
#if 1
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0);
    glVertex2i(DrawSpace.Min.X, DrawSpace.Min.Y);
    glTexCoord2i(0, 1);
    glVertex2i(DrawSpace.Min.X, DrawSpace.Max.Y);
    glTexCoord2i(1, 1);
    glVertex2i(DrawSpace.Max.X, DrawSpace.Max.Y);
    glTexCoord2i(1, 0);
    glVertex2i(DrawSpace.Max.X, DrawSpace.Min.Y);
    glEnd();
    
    glBindTexture(GL_TEXTURE_2D, 0);
#endif

    if(Effect.Type)
    {
        OpenGLEndProgram(Effect);
    }
The shader I'm running to try and get my transition effect, plus a slightly different vertex shader, since this pass covers the whole screen rather than taking co-ordinates from the vertex array:

code:
    char *FixedPipelineVertexCode = R"GLSL(

smooth out v2 FragUV;

void main()
{
v2 A = V2(0.5, 0.5);
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
FragUV = (gl_Position.xy * A) + A;
}

)GLSL";
        case PPEffect_ScreenTransition_MoveDown:
        {
            char *VertexCode = FixedPipelineVertexCode;
            char *FragmentCode = R"GLSL(
uniform sampler2D TextureSamplerGame;
uniform sampler2D TextureSamplerCutscene;
uniform f32 Start;
uniform f32 End;
uniform f32 t;
smooth in v2 FragUV;
smooth out v4 Colour;

void main()
{
v4 TestColour = V4(1, 1, 1, 1);

f32 Normalisedt = Clamp01MapToRange(End, t, Start);

if(FragUV.y <= Normalisedt)
{
TestColour = texture(TextureSamplerGame, FragUV + V2(0, 1 - Normalisedt));
}
else
{
TestColour = texture(TextureSamplerCutscene, FragUV - V2(0, Normalisedt));
}

Colour = TestColour;
}
     )GLSL";

            GLProgram->ProgramID = OpenGLCreateProgram(Defines, HeaderCode, VertexCode, FragmentCode);
            GLProgram->TextureSamplerID = glGetUniformLocation(GLProgram->ProgramID, "TextureSamplerGame");
            GLProgram->TextureSamplerID_2 = glGetUniformLocation(GLProgram->ProgramID, "TextureSamplerCutscene");
            GLProgram->t = glGetUniformLocation(GLProgram->ProgramID, "t");
            GLProgram->Start = glGetUniformLocation(GLProgram->ProgramID, "Start");
            GLProgram->End = glGetUniformLocation(GLProgram->ProgramID, "End");
        } break;
OpenGLBeginProgram does what you'd expect for this:

code:
        case PPEffect_ScreenTransition_MoveUp:
        case PPEffect_ScreenTransition_MoveDown:
        case PPEffect_ScreenTransition_MoveLeft:
        case PPEffect_ScreenTransition_MoveRight:
        {
            glUniform1i(OpenGL->Programs[Effect.Type].TextureSamplerID, 0);
            glUniform1i(OpenGL->Programs[Effect.Type].TextureSamplerID_2, 1);
            glUniform1f(OpenGL->Programs[Effect.Type].t, Effect.t);
            glUniform1f(OpenGL->Programs[Effect.Type].Start, Effect.Start);
            glUniform1f(OpenGL->Programs[Effect.Type].End, Effect.End);

            glClearColor(0.0f, 0.0f, 0, 1);
            glClear(GL_COLOR_BUFFER_BIT);
        } break;
I get the transition effect, but my OpenGL-fu isn't strong enough to understand why any sprite I'm rendering to GlobalAltTexture is filling up the whole thing. It may be that I'm completely mis-conceptualising how OpenGL operates in some way; I find the way it's put together very strange and unintuitive, but I've never hit a brick wall like I have with this problem. I've inspected the data being passed to it (vertex co-ordinates and UV co-ordinates), and it doesn't seem any different to what's being passed to the main texture.

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
I was ping-ponging between here and the oldie thread about my trials and tribulations with note-taking applications. Boost Note hosed me up, but I decided for work that I wanted some OneNote-like thing I could use on my Linux setup. To that end, I started playing with Joplin. It's also using its own little database thing. Has anybody ever been screwed by it? I also wonder if anybody has tried it between their phone and syncing with personal hosting or something. I'd be really interested in making shopping lists and syncing them to my phone (edit: which is not for work, but I would expand how I use it if it's actually robust).

Rocko Bonaparte fucked around with this message at 18:30 on Jun 7, 2022

Spatial
Nov 15, 2007

Sirocco posted:

Yeah, I was hoping someone would say "oh you probably did X". Yes, when I draw anything to the main framebuffer's texture it displays correctly. I'll try and put the salient code here in some sort of sensible order.
Don't have time to go into any depth of analysis right now, but distorted drawing like that is usually down to the viewport/projection being incorrect or simply using the wrong texture handle. It sounds like accidentally using the sprite's texture when drawing the framebuffer would have that stretching effect.

If you haven't already, RenderDoc is an extremely useful debug tool. It allows you to step through all the rendering steps in a frame and check the parameters and visually check the textures used, showing the intermediate results all the way. I found it invaluable when experimenting with the stencil buffer. :v:

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
My org likes using Postman but the licensing costs are insane, and we don't really need the collaboration and automation features. Are there alternatives to Postman that are worth considering? Ideally GUI based for ease of use across different teams, not everybody involved is comfortable with CLI.

Sab669
Sep 24, 2009

What's wrong with just using the Free version if you're not using the collab tools? If you're just hitting your own APIs that should be fine

Kuule hain nussivan
Nov 27, 2008

fletcher posted:

My org likes using Postman but the licensing costs are insane, and we don't really need the collaboration and automation features. Are there alternatives to Postman that are worth considering? Ideally GUI based for ease of use across different teams, not everybody involved is comfortable with CLI.

I quite like Insomnia.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Sab669 posted:

What's wrong with just using the Free version if you're not using the collab tools? If you're just hitting your own APIs that should be fine

Yup we're just hitting our own APIs. The free version has been meeting all of our needs, so that option is on the table. It's easy enough to stick a collection json in git to share them.

Kuule hain nussivan posted:

I quite like Insomnia.

Thanks! That one came up in my searching and it looked pretty nice.

Sirocco
Jan 27, 2009

HEY DIARY! HA HA HA!

Spatial posted:

Don't have time to go into any depth of analysis right now, but distorted drawing like that is usually down to the viewport/projection being incorrect or simply using the wrong texture handle. It sounds like accidentally using the sprite's texture when drawing the framebuffer would have that stretching effect.

If you haven't already, RenderDoc is an extremely useful debug tool. It allows you to step through all the rendering steps in a frame and check the parameters and visually check the textures used, showing the intermediate results all the way. I found it invaluable when experimenting with the stencil buffer. :v:

Thanks, I'll have a play about with some of these ideas. RenderDoc doesn't seem to like some of the OpenGL functions I'm using so I'll have a think about whether there's a way around that.

Sirocco
Jan 27, 2009

HEY DIARY! HA HA HA!
Well, I couldn't get RenderDoc to work, because it objects to the various functions I'd been using to set up the projection matrix, including LoadIdentity... it doesn't seem to like glBegin or glEnd either. Fiddling with the projection matrix and the viewport doesn't seem to have changed anything either. I inspected the inputs, and the position and UV co-ordinates definitely seem correct. The same inputs rendered to GL_TEXTURE0 appear in the correct place and at the correct size, but when rendered to GL_TEXTURE1 they just appear to fill the whole texture, regardless of what position or size I pass through. I'm just completely baffled at this point, and I can't really find much information on how to "inspect" a texture unit, if such a thing is possible, to find out what vital parameter is messed up somewhere; but then OpenGL documentation as a whole seems very opaque, or maybe it's just me.

Ihmemies
Oct 6, 2012

So I was studying CS at university in 2005-2012. Mainly I played WoW and got nothing school-related done after 2006. Then I studied to be an x-ray tech/nurse, and now, after 6 years of a 1-1.5h one-way commute (2-3h total), I am ready to try CS again.

I am not very good at setting goals and studying completely independently, so that's why I'd like to go back to university and earn a degree for myself. I could then meet companies and people while studying and maybe get a job easier that way.

Are there any real risks with computer touching these days? I want a job with as short a commute as possible, better salary and benefits, a healthy work/life balance, and a career as far away from the healthcare hellscape as possible.

X-rays are cool but I can't get a job with reasonable commute. I can work with software from my home so optimally my commute would be a few meters at maximum in future.

If AI takes over all the programming jobs I can go back to x-rays, or if the situation is dire, work at elderly care.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

Ihmemies posted:

So I was studying CS at university in 2005-2012. Mainly I played WoW and got nothing school-related done after 2006. Then I studied to be an x-ray tech/nurse, and now, after 6 years of a 1-1.5h one-way commute (2-3h total), I am ready to try CS again.

I am not very good at setting goals and studying completely independently, so that's why I'd like to go back to university and earn a degree for myself. I could then meet companies and people while studying and maybe get a job easier that way.

Are there any real risks with computer touching these days? I want a job with as short a commute as possible, better salary and benefits, a healthy work/life balance, and a career as far away from the healthcare hellscape as possible.

X-rays are cool but I can't get a job with reasonable commute. I can work with software from my home so optimally my commute would be a few meters at maximum in future.

If AI takes over all the programming jobs I can go back to x-rays, or if the situation is dire, work at elderly care.

The programming job is fundamentally one of continuous self-learning. If that isn't something you're interested in, you will neither enjoy it nor progress as fast as you or others expect.

CarForumPoster
Jun 26, 2013

⚡POWER⚡

leper khan posted:

The programming job is fundamentally one of continuous self-learning. If that isn't something you're interested in, you will neither enjoy it nor progress as fast as you or others expect.

This is true IME.

That said, are there career risks compared to being an X-ray tech? Not really. You can teach yourself Python on self-directed projects and get a job from just that, doing some API gluing. It's a blue-collar-education-level job that society perceives as white collar, and it pays decently to boot.

Volmarias
Dec 31, 2002

EMAIL... THE INTERNET... SEARCH ENGINES...
The hardest part will be getting your foot in the door if that college time didn't result in a degree. Many places demand a CS degree or similar as a screening function for HR, so you'll need to find a way to get past that so that work experience can cover for you going forward.

Ihmemies
Oct 6, 2012

The college time was completely wasted.

Anyways. I can go back and get a Master of Science in computer science, starting this fall. Like this, but in another city: https://www.helsinki.fi/en/degree-programmes/computer-science-masters-programme I'd also need to complete the lower degree first, but anyway, the Master's would be my goal.

University is free here so I just need to apply for it.

With a degree it may or may not be easier to get work. I don't really work because of the money, but because I need some structure in my life so I don't succumb to WoW again.

I fear basic Python skills won't cut it as doom progresses in the world, but maybe more advanced skills will be in demand. I know I am late to this, and HR will go like "wait, you're 40 years old with 0 years of experience?!?", but I very much hate my commute.

I also think the x-ray tech job is extremely boring and repetitive in the long run, and there are really no new challenging things to learn at my current work. So my current job is a dead end.

Ihmemies fucked around with this message at 05:14 on Jun 22, 2022

Computer viking
May 30, 2011
Now with less breakage.

Programmers are always useful for gluing things together, so things would need to go really Hollywood-style bad before that becomes irrelevant. But in that case, you'd be better off studying non-mechanised farming anyway.

I guess you could try to aim for something where the experience you have is - or sounds - relevant. Bioinformatics and hoping to do something with medical image analysis may be too niche, but it is a field that exists.

There are also a lot of niches within hospital systems. I'm at OUS in Oslo, and we're looking for someone who is OK with databases and learning a janky BASIC dialect to tinker with and help users of our sample tracking system; clinical experience is absolutely a plus. The "innovation clinic" wants random networking and windows admin experience, but also programmers in general, perhaps with some "real" experience.

There's a whole department that makes software to support researchers - like a (C#) windows program that is a front end for a database, and a registry to safely store a mapping between pseudonymous IDs in research studies and the real person.

I'm an informatics MSc that dropped by a cancer research group to make a sample tracking system for a group of researchers, and ended up helping them write python and R and run the file and calculation servers.

All of these kind of require that you can adapt to new languages and have a rough familiarity with surrounding ideas (like databases and system administration). However, that is kind of the point of an informatics MSc; it's not really about directly teaching you one set of tools, but more about the meta level of teaching you to quickly pick up new-but-similar ones. Even if that's not explicitly written anywhere.

cheetah7071
Oct 20, 2010

honk honk
College Slice
Is there any way to configure git to handle committing to two repositories simultaneously in a coherent way? I have a project where the main branch is still in constant flux, but as part of a deadline issue I'm sprinting to make a version of it that's suitable for internal use before then, which would, in better circumstances, be a private fork of the main branch because it will have some stuff that doesn't belong in the public repository.

Ideally, I'd like a situation where by default, when I make a commit, for most of the files (the ones that belong in the main branch) it commits to both the main and the private versions, and for a small subset of the files, it only commits to the private version. Ideally I'd do this without having two copies of the code to keep in sync on my harddrive.

leper khan
Dec 28, 2010
Honest to god thinks Half Life 2 is a bad game. But at least he likes Monster Hunter.

cheetah7071 posted:

Is there any way to configure git to handle committing to two repositories simultaneously in a coherent way? I have a project where the main branch is still in constant flux, but as part of a deadline issue I'm sprinting to make a version of it that's suitable for internal use before then, which would, in better circumstances, be a private fork of the main branch because it will have some stuff that doesn't belong in the public repository.

Ideally, I'd like a situation where by default, when I make a commit, for most of the files (the ones that belong in the main branch) it commits to both the main and the private versions, and for a small subset of the files, it only commits to the private version. Ideally I'd do this without having two copies of the code to keep in sync on my harddrive.

Got operates on a single version of history at a time. You could maybe abuse hooks into doing something like what you're asking.

You could probably use submodules for this, but that will confuse anyone who isn't used to them.
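E.g. keeping the private-only files in their own repo that just the internal checkout references (repo name and path made up):

code:
git submodule add git@github.com:yourorg/private-extras.git private/
git commit -m "track private extras as a submodule"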

KillHour
Oct 28, 2007


leper khan posted:

Got operates on a single version of history at a time.

poo poo now I need to make a Git competitor called Got.

Computer viking
May 30, 2011
Now with less breakage.

KillHour posted:

poo poo now I need to make a Git competitor called Got.

Too late.

KillHour
Oct 28, 2007



Oh come on it's even built on top of Git. When is there going to be a change management system that treats Windows like a first class citizen and has a native release?

Computer viking
May 30, 2011
Now with less breakage.

KillHour posted:

Oh come on it's even built on top of Git. When is there going to be a change management system that treats Windows like a first class citizen and has a native release?

On the positive side, having two entirely separate code bases use the same repository format seems like a good thing in the long run - if nothing else it makes it feel like more of a standard and less of an internal format that happens to be openly documented.

As for an SCM with native Windows support, it's been some years since MS retired VSS. I guess they just integrate Git into Visual Studio these days?

camoseven
Dec 30, 2005

RODOLPHONE RINGIN'

KillHour posted:

Oh come on it's even built on top of Git. When is there going to be a change management system that treats Windows like a first class citizen and has a native release?

Can I interest you in TFS?

lifg
Dec 4, 2000
<this tag left blank>
Muldoon

cheetah7071 posted:

Is there any way to configure git to handle committing to two repositories simultaneously in a coherent way? I have a project where the main branch is still in constant flux, but as part of a deadline issue I'm sprinting to make a version of it that's suitable for internal use before then, which would, in better circumstances, be a private fork of the main branch because it will have some stuff that doesn't belong in the public repository.

Ideally, I'd like a situation where by default, when I make a commit, for most of the files (the ones that belong in the main branch) it commits to both the main and the private versions, and for a small subset of the files, it only commits to the private version. Ideally I'd do this without having two copies of the code to keep in sync on my harddrive.

This almost sounds like the Git Flow model. Most commits go to the "next release" branch, but patch commits go straight to the main branch and also to the "next release" branch.
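In command form, roughly (using the conventional gitflow branch names):

code:
git checkout -b develop main          # the "next release" branch
git checkout -b feature/foo develop   # day-to-day work happens on feature branches
# ...commit, then merge feature/foo back into develop...

git checkout -b hotfix/bar main       # an urgent patch branches from main
# ...commit the fix...
git checkout main && git merge hotfix/bar
git checkout develop && git merge hotfix/bar   # so the patch lands in both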

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆

cheetah7071 posted:

Is there any way to configure git to handle committing to two repositories simultaneously in a coherent way? I have a project where the main branch is still in constant flux, but as part of a deadline issue I'm sprinting to make a version of it that's suitable for internal use before then, which would, in better circumstances, be a private fork of the main branch because it will have some stuff that doesn't belong in the public repository.

Ideally, I'd like a situation where by default, when I make a commit, for most of the files (the ones that belong in the main branch) it commits to both the main and the private versions, and for a small subset of the files, it only commits to the private version. Ideally I'd do this without having two copies of the code to keep in sync on my harddrive.

Is there a reason why your fork needs to be an entirely separate repository and not just another branch in the first repo? As long as you make sure to make separate commits for changes touching the universal files vs the project-specific stuff you can just cherry-pick all the relevant stuff back to the main branch.
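E.g. (branch name and sha made up):

code:
git checkout internal                 # your private working branch
git commit -m "fix shared parser"     # a commit touching only public-safe files
git checkout main
git cherry-pick abc1234               # replay just that commit onto the public branch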

RPATDO_LAMD fucked around with this message at 02:28 on Jun 23, 2022

cheetah7071
Oct 20, 2010

honk honk
College Slice

RPATDO_LAMD posted:

Is there a reason why your 'fork' needs to be an entirely separate repository and not just another branch in the first repo? As long as you make sure to make separate commits for changes touching the universal files vs the project-specific stuff you can just cherry-pick them all back to the main branch some time later.

if there's a way to have the main branch be public (later down the road when it is public) and the internal one be private (but still hosted in a repository for sharing internally) then that's fine. I'm really a complete git newbie, I pretty much just use it as an overcomplicated way to back up my code right now

Since the main branch isn't actually public yet you're right that I can use the private one as the working branch for now, and just merge in all the public changes right before it's time to go public.

cheetah7071 fucked around with this message at 02:32 on Jun 23, 2022

RPATDO_LAMD
Mar 22, 2013

🐘🪠🍆
Git doesn't have any internal concept of public or private.
When you want to publicize things later down the road you can fork this current repo and delete all the branches and commits with private stuff in them. Hopefully by then you won't be simultaneously developing 2 versions of the same software? Otherwise it will be a pain and you'll have all the same problems you have right now all over again.

Blinkz0rz
May 27, 2001

MY CONTEMPT FOR MY OWN EMPLOYEES IS ONLY MATCHED BY MY LOVE FOR TOM BRADY'S SWEATY MAGA BALLS

cheetah7071 posted:

Is there any way to configure git to handle committing to two repositories simultaneously in a coherent way? I have a project where the main branch is still in constant flux, but as part of a deadline issue I'm sprinting to make a version of it that's suitable for internal use before then, which would, in better circumstances, be a private fork of the main branch because it will have some stuff that doesn't belong in the public repository.

Ideally, I'd like a situation where by default, when I make a commit, for most of the files (the ones that belong in the main branch) it commits to both the main and the private versions, and for a small subset of the files, it only commits to the private version. Ideally I'd do this without having two copies of the code to keep in sync on my harddrive.

So conceptually this works fine in Git, because Git can handle multiple branches with differing commits and can push branches to multiple remotes, although, like the previous poster mentioned, there's really no such concept of public or private in Git itself.

However, if you really want to do this (assuming you were conflating Git and GitHub), here's how you'd do it. But be warned: this is going to get really lovely, really quickly:

1. Create both repos, one public, one private
2. Create 2 separate default branches for each repo (public one is main, private one is develop or whatever)
3. Add both repos as separate git remotes (git remote add public git@github.com:yourorg/publicrepo.git and git remote add private git@github.com:yourorg/privaterepo.git)
4. Use a separate local branch that isn't mapped to any upstream remote to hopefully avoid confusion
5. Cherry pick changes off your separate local branch to each tracking branch depending on if the changes should be public or private
6. When you want to push to your public repo: git push public main
7: When you want to push to your private repo: git push private develop
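Pulled together, it looks something like this (org and repo names made up):

code:
git remote add public git@github.com:yourorg/publicrepo.git
git remote add private git@github.com:yourorg/privaterepo.git
git checkout -b scratch          # local-only branch, no upstream
# ...commit freely on scratch, then cherry-pick public-safe commits
# onto main and private-only commits onto develop...
git push public main
git push private develop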

But don't do this, because juggling changes, cherry-picking stuff, and making sure you don't accidentally push the private branch to the public repo is going to be super annoying. Also, lmao if you even consider trying this with more than 1 person.

cheetah7071
Oct 20, 2010

honk honk
College Slice
I was afraid that there's just no way to do it that isn't lovely

nielsm
Jun 1, 2009



If you can programmatically determine which changes need to go where, you could probably write a post-commit hook, added to one of the repos, that takes the new commit and replicates the relevant parts in another working tree somewhere else.
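As a rough sketch of that hook (the PUBLIC_TREE path and the .private-files pattern list are made up, and this ignores deleted files):

code:
#!/bin/sh
# .git/hooks/post-commit in the private repo
PUBLIC_TREE="$HOME/src/project-public"
MSG=$(git log -1 --format=%s)

# copy every file touched by the commit, except the private ones, into the other tree
git diff-tree --no-commit-id --name-only -r HEAD | grep -vxFf .private-files |
while read -r f; do
    mkdir -p "$PUBLIC_TREE/$(dirname "$f")"
    cp "$f" "$PUBLIC_TREE/$f"
done

# replicate the commit there with the same message
git -C "$PUBLIC_TREE" add -A
git -C "$PUBLIC_TREE" commit -m "$MSG"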

LongSack
Jan 17, 2003

Not sure if this is the correct thread, but seems most likely, so here goes.

I’ve been doing a lot of full-stack development lately. I use VS2022 for the back-end api stuff, and VS Code for the front end. I’m using React, though I don’t think that’s relevant.

I find that as I work through the day, VS Code gets slower and slower, so that by mid-afternoon (or earlier) a save can take 20-30 seconds (the blame box in the lower right blames Prettier), and the end-of-day commit can take 3 minutes or more; and mind you, this is a local commit only, I haven't pushed the code to an external repo yet. Intellisense also gets less and less responsive, which is extremely irritating.

Is this normal? Is there something I should look at to try to fix it? I know the blame box points to Prettier, but I’m not sure that’s the actual culprit. I feel like as more and more changes get tagged into version control, it just gets slower.

I should note that this is on my fairly beefy desktop machine, a 16-core I9 based system with 32GB RAM, if that matters.

Is there a better alternative to VS Code? I’ve been looking at Jetbrains Rider which, while not free, is reasonably priced for an individual.

Ideas?

pokeyman
Nov 26, 2006

That elephant ate my entire platoon.
Doesn't sound normal to me unless your repo is massive, and even then I'm not sure I can come up with a reason for saving the file to take 30 seconds. Does it change if you quit and relaunch VS Code? Does VS slow down at all as the day goes on (I assume not since you didn't mention it)?

I would try turning off every plugin and resetting every non-default setting in VS Code and see if it gets better. Assuming it does, I'd turn one thing back (on) at a time until it goes to poo poo. Prettier sounds like a good thing to start with.
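One shortcut for that experiment: VS Code can be launched with every extension disabled in one go, which makes the bisecting quicker:

code:
code --disable-extensions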

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

LongSack posted:

Is this normal? Is there something I should look at to try to fix it? I know the blame box points to Prettier, but I’m not sure that’s the actual culprit. I feel like as more and more changes get tagged into version control, it just gets slower.
Ideas?

How do you know it's not prettier? (https://github.com/prettier/prettier-vscode/issues/1333)

LongSack
Jan 17, 2003


It may be, and there are a number of good suggestions in that thread, so thanks for that link.

I started playing around with Rider, but it is missing one feature (that people have been asking for for four years) - auto format on save, and i really miss that. It also occasionally has problems indenting JSX elements properly. VS Code has this problem, too, but at least the auto format on save fixes it.

camoseven
Dec 30, 2005

RODOLPHONE RINGIN'

LongSack posted:

It may be, and there are a number of good suggestions in that thread, so thanks for that link.

I started playing around with Rider, but it is missing one feature (that people have been asking for for four years) - auto format on save, and i really miss that. It also occasionally has problems indenting JSX elements properly. VS Code has this problem, too, but at least the auto format on save fixes it.

I could never go back to the bad old days of not having auto format on save. So much mental overhead saved!

necrotic
Aug 2, 2005
I owe my brother big time for this!
You can add a custom macro in rider and assign a keybind to have it run format and then save. I have this in Android Studio bound to cmd-shift-s

Super-NintendoUser
Jan 16, 2004

COWABUNGERDER COMPADRES
Soiled Meat
Can someone give me a primer on building json in bash using jq? I have a handle on parsing it, but basically I'm hitting an API that returns a bunch of json, and I only need one field from it, but I have to hit the API N times and build a json array of the results. Essentially the API returns details about a singular host, and I filter the output to just get the field I need. I can also iterate and parse a large list of hosts, and get back all the strings I want, but turning that into some jq is proving challenging.

I need this output from ./script.sh fieldWanted:

code:
{
"host_1": "some_info_1", 
"host_2": "some_info_2",
"host_[i]N[/i]" : "some_info_[i]N[/i]"
}
What I'm doing now is just building a bash associative array, that is, key-value pairs, and I can dump that to the shell. But this needs to work at scale (N may be 50 or 100 iterations) and be programmatically parse-able, so they asked me to switch it to returning JSON.

The bash array version; it works, but I need to change it to JSON:

code:
declare -A result_set
for host in $HOSTS
do 
    log "Checking $host "
    endPoint="/json/api/"
    dbCheck=$(curl -s -X GET  "https://$host$endPoint" |jq)
    log "Result for $host = $(echo $dbCheck|jq -r .fieldWanted)"
    result_set+=(["$host "]="$(echo $dbCheck|jq -r .fieldWanted)")
done    

log "here is the result set:"
# it's a bash array so parsing is fun:

for elem in "${!result_set[@]}"
do
 echo "key : ${elem}" -- "value: ${result_set[${elem}]}"
done

So my pseudo-code for the bash that would build that output with jq:

code:

hosts="host_1 host_2 host_[i]N[/i] ..."

for host in $hosts
do
 	result=$(curl "https://api/rest/endpoint/info&host=$host" | jq '.[].filter')
	# the result is just a string "some_info_1"
	result_set=$(some jq here that builds an array { "$host" : "$result" } )
done

echo $result_set 
Beating my head against jq is fun, but I need to learn it.

Jabor
Jul 16, 2010

#1 Loser at SpaceChem

Jerk McJerkface posted:

Can someone give me a primer on building json in bash using jq? I have a handle on parsing it, but basically I'm hitting an API that returns a bunch of json, and I only need one field from it, but I have to hit the API N times and build a json array of the results. Essentially the API returns details about a singular host, and I filter the output to just get the field I need. I can also iterate and parse a large list of hosts, and get back all the strings I want, but turning that into some jq is proving challenging.

I need this output from ./script.sh fieldWanted:

code:
{
"host_1": "some_info_1", 
"host_2": "some_info_2",
"host_[i]N[/i]" : "some_info_[i]N[/i]"
}

This isn't an array, for starters. An array would look like:

code:

[
"some_info_1",
"some_info_2",
(etc.)
]

Secondly, does this absolutely need to be done in bash instead of some more appropriate scripting language that isn't 30 years old?
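If the object shape is actually what you want, jq can build it up key by key; a minimal sketch, reusing the endpoint and field name from the post above:

code:
result_set='{}'
for host in $HOSTS
do
    info=$(curl -s "https://$host/json/api/" | jq -r '.fieldWanted')
    # merge {"<host>": "<info>"} into the accumulator object
    result_set=$(jq --arg h "$host" --arg v "$info" '. + {($h): $v}' <<< "$result_set")
done
echo "$result_set"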


Volguus
Mar 3, 2009
Python is a billion years old too, but it would definitely be more appropriate, and just easier. But if you want to bang your head against it... by all means. Just echoing the JSON is what I'd do; no need to invoke jq there for output.
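Something like this, assuming the values never contain quotes or backslashes:

code:
out='{'
sep=''
for host in $HOSTS
do
    info=$(curl -s "https://$host/json/api/" | jq -r '.fieldWanted')
    out="$out$sep\"$host\": \"$info\""
    sep=', '
done
out="$out}"
echo "$out"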
