Doctor w-rw-rw-
Jun 24, 2008
So, possibly a newbie question. I'm trying to figure out what the heck I'm doing wrong to cause a crash.

This works.
C++ code:
uint8_t *buffer = (uint8_t *)av_malloc(8192);
AVIOContext *avioContext = avio_alloc_context(buffer, 8192, 0, &bd, &read_packet, nullptr, nullptr);
This also works...
C++ code:
auto deleter = [](void *ptr){ blog(LOG_WARNING, "ptr = %p", ptr); if (ptr) av_free(ptr);  };
std::unique_ptr<uint8_t, decltype(deleter)> buffer(reinterpret_cast<uint8_t *>(av_malloc(8192)), deleter);
blog(LOG_WARNING, "buffer: %p", buffer.get());
AVIOContext *avioContext = avio_alloc_context(buffer.get(), 8192, 0, &bd, &read_packet, nullptr, nullptr);
This crashes.
C++ code:
auto deleter = [](void *ptr){ blog(LOG_WARNING, "ptr = %p", ptr); if (ptr) av_free(ptr);  };
std::unique_ptr<uint8_t, decltype(deleter)> buffer(reinterpret_cast<uint8_t *>(av_malloc(8192)), deleter);
AVIOContext *avioContext = avio_alloc_context(buffer.get(), 8192, 0, &bd, &read_packet, nullptr, nullptr);
Logs complain that the memory being freed wasn't alloced, or something. Is there some detail of smart pointer lifetime I'm missing here?

(I don't think it's terribly relevant, but as you might infer this is using ffmpeg libraries.)

Null Pointer
May 20, 2004

Oh no!

Doctor w-rw-rw- posted:

(I don't think it's terribly relevant, but as you might infer this is using ffmpeg libraries.)

It might be relevant. According to this, the library is allowed to free and reallocate the buffer that you pass to it. You're supposed to free avioContext->buffer instead of the original buffer. I've never used FFmpeg so I'm not sure if this is correct, but it would explain the double free.

Example #2 most likely also has this problem. The error is probably just not detected in this specific case; perfect fault detection is too expensive for a production allocator, so all you'll ever get is the subset of true positives that can be confirmed with low overhead. You can always try a debug allocator or dynamic analysis (e.g. valgrind) if you want to know for sure.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Null Pointer posted:

It might be relevant. According to this, the library is allowed to free and reallocate the buffer that you pass to it. You're supposed to free avioContext->buffer instead of the original buffer. I've never used FFmpeg so I'm not sure if this is correct, but it would explain the double free.

Yes, this is correct. A correct implementation would be something like:
C++ code:
auto deleter = [](AVIOContext *ptr){ av_free(ptr->buffer); av_free(ptr); };
uint8_t *buffer = (uint8_t *)av_malloc(8192);
if (!buffer) { ... }
std::unique_ptr<AVIOContext, decltype(deleter)> avioContext(avio_alloc_context(buffer, 8192, 0, &bd, &read_packet, nullptr, nullptr), deleter);
if (!avioContext) { av_free(buffer); }

Doctor w-rw-rw-
Jun 24, 2008
Ah, thanks! The function is somewhat long and involved, so I had been hoping to avoid having to figure out cleanup paths by just returning and letting everything clean up on scope exit.

netcat
Apr 29, 2008
I have to include some code that still uses tr1, and this causes some redefinition errors in google test. This can be fixed by defining these macros:

-DGTEST_HAS_TR1_TUPLE=0
-DGTEST_USE_OWN_TR1_TUPLE=1

but why does it happen in the first place? Isn't everything in separate namespaces?

Chuu
Sep 11, 2004

Grimey Drawer
What's considered the best unit test framework these days? Specifically, I am trying to construct a basic testing framework for a project that already has several KLOC of code with just BOOST_ASSERTs sprinkled around to keep it in check.

sarehu
Apr 20, 2007

(call/cc call/cc)

Chuu posted:

What's considered the best unit test framework these days? Specifically, I am trying to construct a basic testing framework for a project that already has several KLOC of code with just BOOST_ASSERTs sprinkled around to keep it in check.

I think googletest works fine. Easy setup, painless to use, I was happy with it.
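
For anyone who hasn't used it, a whole test file can be about this small (a minimal sketch; the test names are made up and you still need to build and link against gtest):
C++ code:
#include <gtest/gtest.h>

// One test case with one assertion; the TEST macro registers it automatically.
TEST(MathTest, AdditionWorks) {
    EXPECT_EQ(2 + 2, 4);
}

int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);  // consumes the --gtest_* flags
    return RUN_ALL_TESTS();
}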

Ralith
Jan 12, 2011

I see a ship in the harbor
I can and shall obey
But if it wasn't for your misfortune
I'd be a heavenly person today
Google test is decent and usable. There's a bunch of interesting-looking C++11/14 libs on github that I haven't yet tinkered with as well, which promise to be much less macro-heavy and perhaps more actively maintained.

Jinx
Sep 9, 2001

Violence and Bloodshed

Ralith posted:

Google test is decent and usable. There's a bunch of interesting-looking C++11/14 libs on github that I haven't yet tinkered with as well, which promise to be much less macro-heavy and perhaps more actively maintained.

Google test (and by extension google mock) are generally quite good, and generally bug free. They are a bit slack about updating when compilers add new language features (e.g. override), but it's not that big a deal. We have our own implementation of a std::unique_ptr wrapper, but I'm not sure if anything has been done in Google's framework. Google test and mock also have a heap of ways of dealing with side effects, which is very useful if you need to deal with C-language-based interfaces. They also support parameterized (but not typed parameterized) tests.

The main disadvantages are the macros, and there's no way to mock out free functions (other than doing it yourself with namespace tricks). Some other frameworks allow you to mock out free functions, but their implementations are terrifying.
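
For reference, a value-parameterized test is only a little more ceremony than a plain TEST (a rough sketch using the macro names from the gtest of that era; the fixture and values are made up):
C++ code:
#include <gtest/gtest.h>

// Fixture parameterized over an int; GetParam() yields the current value.
class EvenTest : public ::testing::TestWithParam<int> {};

TEST_P(EvenTest, IsEven) {
    EXPECT_EQ(GetParam() % 2, 0);
}

// Run the test body once for each value in the list.
INSTANTIATE_TEST_CASE_P(SmallEvens, EvenTest, ::testing::Values(2, 4, 6));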

Jinx fucked around with this message at 07:27 on Oct 23, 2015

Sagacity
May 2, 2003
Hopefully my epitaph will be funnier than my custom title.
I quite enjoy bandit.

hayden.
Sep 11, 2007

here's a goat on a pig or something
I have a little OLED hooked up to my Raspberry Pi like this:



The library for it is written in C++. I'm a web developer and know nothing about how to approach this project:

I want the OS's console to be displayed on this little OLED. I guess I need a way for C++ to see what the console currently shows, including what is being typed on it. Does anyone know how I can do that? Just the getting-console-info part; I can probably figure out the printing to the OLED. Whatever the final script winds up being is just something I'll have autolaunch at start, so I know I won't get the pre-boot stuff, but I am ok with that.

nielsm
Jun 1, 2009



Two ways I see of getting a console on an extra display like that:

1) Write a kernel console driver, that actually uses the display as system console. (Keep in mind Linux kernel is C, not C++.)

2) The Xterm approach, write a usermode application that launches a new login shell and renders the input/output for that process on the display.

Option 2 can optionally be run through init or similar, so you always get a console on it.
Note that for option 2, a regular keyboard still counts as belonging to the system console, and if the system console has multiple virtual consoles, as Linux systems often have, your application only runs on one virtual console so you can actually switch "keyboard focus" away.
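
A very rough sketch of option 2, assuming forkpty() from libutil (error handling omitted; render_to_oled() is a made-up placeholder for whatever the display library actually provides, and you'd also want to forward keyboard input by writing to the master fd):
C++ code:
#include <pty.h>        // forkpty(); link with -lutil
#include <unistd.h>
#include <cstdio>

int main() {
    int master;
    pid_t pid = forkpty(&master, nullptr, nullptr, nullptr);
    if (pid == 0) {
        // Child: the pty slave is now the controlling terminal, so exec a login shell on it.
        execlp("bash", "bash", "-l", (char *)nullptr);
        _exit(1);
    }
    // Parent: everything the shell writes arrives on the master side.
    char buf[256];
    ssize_t n;
    while ((n = read(master, buf, sizeof buf)) > 0) {
        // render_to_oled(buf, n);    // hypothetical: draw these bytes on the display
        fwrite(buf, 1, n, stdout);    // placeholder: echo to stdout for now
    }
    return 0;
}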

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!
I'm getting stuck in circular header dependency hell due to inline code requiring the definitions of things that are themselves dependent on the type that the header is for.

Is there a good pattern for breaking the circular dependencies in something like this? i.e.:

code:
A.h:
#pragma once
struct A { void DoSomething(); };
#include "B.h"
inline void A::DoSomething() { B b; }

B.h:
#pragma once
#include "A.h"
struct B { A a; };

Test.cpp:
#include "B.h"

b0lt
Apr 29, 2005

OneEightHundred posted:

I'm getting stuck in circular header dependency hell due to inline code requiring the definitions of things that are themselves dependent on the type that the header is for.

Is there a good pattern for breaking the circular dependencies in something like this? i.e.:

code:
A.h:
#pragma once
struct A { void DoSomething(); };
#include "B.h"
inline void A::DoSomething() { B b; }

B.h:
#pragma once
#include "A.h"
struct B { A a; };

Test.cpp:
#include "B.h"

Pull the struct definition out into its own header:
code:
A.h
#pragma once
#include "private/A.h"
#include "B.h"
inline void A::DoSomething() { B b; }

private/A.h
#pragma once
struct A { void DoSomething(); };

B.h
#pragma once
#include "private/A.h"
struct B { A a; };

Hubis
May 18, 2003

Boy, I wish we had one of those doomsday machines...

OneEightHundred posted:

I'm getting stuck in circular header dependency hell due to inline code requiring the definitions of things that are themselves dependent on the type that the header is for.

Is there a good pattern for breaking the circular dependencies in something like this? i.e.:

code:
A.h:
#pragma once
struct A { void DoSomething(); };
#include "B.h"
inline void A::DoSomething() { B b; }

B.h:
#pragma once
#include "A.h"
struct B { A a; };

Test.cpp:
#include "B.h"

Not to be a style nudge (and recognizing this may be existing code you just can't re-factor), but in every case I can think of where you'd do this, you will never include A without B (as is explicit in your include file anyway), so I would say there's a very strong argument that the two files should just be combined into a larger "conceptually-organized" file rather than separate "implementation-organized" ones:

code:
<AB_Task.h>
#pragma once
// A
struct A { void DoSomething(); };
// B
struct B { A a; };

#include "AB_Task.inl"

<AB_Task.inl>
inline void A::DoSomething() { B b; }

<Test.cpp>
#include "AB_Task.h"
Microsoft does this in their D3D 11 Math libraries, for example -- they have all their various Matrices, Vectors, and Quaternions defined in the header, and then an included inline file at the end that defines various member and friend functions that may rely on several of them. It doesn't fit the standard "one class per header" approach, but I find that falls apart quickly with code complexity anyways and you end up with either a web of include-in-header dependencies that can be a pain to unravel, or a huge block of includes at the top of every CPP file that is subject to include-order issues that are equally opaque.
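
If the inlining itself isn't load-bearing, there's also the boring third option: move the body out of line, so neither header needs more than the other's declarations (a sketch of the same A/B example):
code:
A.h
#pragma once
struct A { void DoSomething(); };   // no inline body, so no need to see B here

B.h
#pragma once
#include "A.h"
struct B { A a; };

A.cpp
#include "A.h"
#include "B.h"
void A::DoSomething() { B b; }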

hayden.
Sep 11, 2007

here's a goat on a pig or something

nielsm posted:

Two ways I see of getting a console on an extra display like that:

1) Write a kernel console driver, that actually uses the display as system console. (Keep in mind Linux kernel is C, not C++.)

2) The Xterm approach, write a usermode application that launches a new login shell and renders the input/output for that process on the display.

Option 2 can optionally be run through init or similar, so you always get a console on it.
Note that for option 2, a regular keyboard still counts as belonging to the system console, and if the system console has multiple virtual consoles, as Linux systems often have, your application only runs on one virtual console so you can actually switch "keyboard focus" away.

Thanks! There's a library someone wrote already that I will try first, and if not I will look into these.

Diametunim
Oct 26, 2010
Seeking some advice on how to clean up some code I recently wrote for my Operating Systems class. Everything in my assignment "Pirates.c" appears to function according to the assignment requirements, but I feel like I wrote the solution like poo poo; anyone mind pointing out some glaringly obvious things I can clean up? I've attached an incorrect solution, but it uses a struct for each thread's data. I believe that idea might lead to a better solution, but could anyone take a look at it and tell me if that method is even possible for this assignment?

I also need to create a secondary program that fails to protect the critical section (The num of pearls in this case). Meaning more than one thread would be able to take from the "pearls" at the same time resulting in an incorrect output. I've tried removing the mutex locks from my program and it still functions correctly, what am I missing here?

Please keep in mind, I've never threaded anything in my life, I still don't fully understand some of the basics, let alone the intricate details. Anyone have any decent reading to fill in some of the blanks for me? I'm pretty much teaching myself here.

Code GIST
e:If you would like to compile "gcc pirates.c -lpthread -lm"

nielsm
Jun 1, 2009



Your function prototypes for the thread main functions are wrong. A pthreads thread function also takes a parameter, a void pointer carrying the data passed to the pthread_create call. You should really use the correct function prototype, avoid having to typecast the function pointer, and then you only need a single "pirate" thread function. That is, pass each thread as a parameter how many pearls it can take and what its name is.
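
Roughly what that looks like (a sketch with made-up names; the body would be your pearl-taking loop instead of a printf):
code:
#include <pthread.h>
#include <stdio.h>

struct pirate_args {
    const char *name;
    int max_pearls;
};

/* The correct shape for a pthread entry point: void *(*)(void *). */
void *pirate_main(void *arg) {
    struct pirate_args *a = (struct pirate_args *)arg;
    printf("%s can take up to %d pearls\n", a->name, a->max_pearls);
    return NULL;
}

int main(void) {
    struct pirate_args args = { "Blackbeard", 3 };
    pthread_t t;
    pthread_create(&t, NULL, pirate_main, &args);  /* no function-pointer cast needed */
    pthread_join(t, NULL);
    return 0;
}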

Sex Bumbo
Aug 14, 2004
I've been using msvc which I know does a lot of non-standard stuff, but why doesn't the following code work on gcc?
code:
struct A { 
    A(int) { }
};

struct B {
    B(A a) { }
};

int main()
{
    A a = 5; // okay
    B b = a; // okay
    B c = 5; // not okay
}
Is there a way I can make this work? Without having to write a new B constructor.

Bonfire Lit
Jul 9, 2008

If you're one of the sinners who caused this please unfriend me now.

Sex Bumbo posted:

I've been using msvc which I know does a lot of non-standard stuff, but why doesn't the following code work on gcc?

Is there a way I can make this work? Without having to write a new B constructor.

It doesn't work on gcc because the compiler isn't supposed to do multiple implicit user-defined conversions on a value (from int to A, then from A to B). This, of course, does not mean that you can't write explicit conversions:
C++ code:
B c = A(5);

Sex Bumbo
Aug 14, 2004

Bonfire Lit posted:

It doesn't work on gcc because the compiler isn't supposed to do multiple implicit user-defined conversions on a value (from int to A, then from A to B). This, of course, does not mean that you can't write explicit conversions:
C++ code:
B c = A(5);

What I mostly want is an unobtrusive way to prevent msvc users from writing code that will break on gcc. Simply using the above notation wouldn't prevent them from writing it incorrectly and building cleanly on msvc.

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed

Sex Bumbo posted:

What I mostly want is an unobtrusive way to prevent msvc users from writing code that will break on gcc.
You can't. People who are not intimately familiar with msvc's quirks will not be able to reliably write code which works in gcc without testing it in gcc.

OneEightHundred
Feb 28, 2008

Soon, we will be unstoppable!

Plorkyeran posted:

People who are not intimately familiar with msvc's quirks will not be able to reliably write code which works in gcc without testing it in gcc.
People who are intimately familiar with MSVC's quirks will still not be able to reliably write code which works in gcc unless the word "template" appears exactly zero times.

Also the main way to avoid unexpected behavior from implicit conversions is to use "explicit" and not use implicit conversions.

Hubis posted:

Not to be a style nudge (and recognizing this may be existing code you just can't re-factor)
It's generated code (for a .NET to C++ transpiler), so on one hand refactoring is easy, but on the other hand it's more important that it consistently compiles than that it's neat.

Jewel
May 2, 2009

Multiple implicit conversion is scary as hell to me anyway. A might have a really big overhead. B = 5 would construct an A and then copy it to B completely silently without you even knowing it was happening. And if you did know, why didn't you just make the A and assign it (= A(5)) in the first place.

Also even without overhead it still seems hosed to me because A could do whatever wacky things with the arguments to then pass in to B, and you wouldn't even know it was happening unless you knew B didn't have an int constructor and it was being delegated through A. Sounds like the source of some potential really nasty bugs, where A is buggy and you never made an A yourself but now B is being silently bugged out because of it.

Jewel fucked around with this message at 04:01 on Oct 27, 2015

Rottbott
Jul 27, 2006
DMC
That's why constructors like that should be marked as explicit unless you have a good reason not to.
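
i.e., something like this (quick sketch): with the converting constructors marked explicit, the surprising conversions stop compiling everywhere, not just on gcc, and the int-to-B case has to be spelled out.
C++ code:
struct A {
    explicit A(int) { }
};

struct B {
    explicit B(A) { }
};

int main()
{
    A a(5);        // fine: direct construction
    B b(a);        // fine: direct construction
    // A a2 = 5;   // error on every compiler: constructor is explicit
    // B c2 = a;   // error: same reason
    B c(A(5));     // the "B from an int" case, spelled out
}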

Vanadium
Jan 8, 2005

Diametunim posted:

Seeking some advice on how to clean up some code I recently wrote for my Operating Systems class. Everything in my assignment "Pirates.c" appears to function according to the assignment requirements, but I feel like I wrote the solution like poo poo; anyone mind pointing out some glaringly obvious things I can clean up? I've attached an incorrect solution, but it uses a struct for each thread's data. I believe that idea might lead to a better solution, but could anyone take a look at it and tell me if that method is even possible for this assignment?

I also need to create a secondary program that fails to protect the critical section (The num of pearls in this case). Meaning more than one thread would be able to take from the "pearls" at the same time resulting in an incorrect output. I've tried removing the mutex locks from my program and it still functions correctly, what am I missing here?

Please keep in mind, I've never threaded anything in my life, I still don't fully understand some of the basics, let alone the intricate details. Anyone have any decent reading to fill in some of the blanks for me? I'm pretty much teaching myself here.

Code GIST
e:If you would like to compile "gcc pirates.c -lpthread -lm"

Do you really need both a mutex and a condition variable to wait on the "occupied" value? Doesn't the mutex being locked already model the condition that only one pirate (thread) can be in the cave (critical section)?

I think your problem in Incorrect.c might just be that you're doing "occupied = 1; if (occupied == 1) {"? Like, each thread is gonna block itself. Otherwise it looks about right.

Also please compile with warnings. :sun:
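
i.e., the critical section really only needs the lock around it (sketch; names made up to match the pearl idea):
code:
#include <pthread.h>

static pthread_mutex_t cave_mutex = PTHREAD_MUTEX_INITIALIZER;
static int pearls = 100;

void take_pearls(int pearls_to_take)
{
    pthread_mutex_lock(&cave_mutex);   /* only one pirate can hold the lock... */
    pearls -= pearls_to_take;          /* ...so only one thread touches the count */
    pthread_mutex_unlock(&cave_mutex);
}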

Sex Bumbo
Aug 14, 2004

Jewel posted:

Multiple implicit conversion is scary as hell to me anyway. A might have a really big overhead.

The above code would work fine if I wrote

B c(5);
or
B c = { 5 };

instead of
B c = 5;

which is a little puzzling to me. But that still seems to have the potential problems you mentioned -- it's not at all obvious from looking at it that an A constructor is being called, right? I suppose enforcing a style of always making constructors explicit would solve it?

The same issue can come up beyond initializing a variable:
void Foo(B);
Foo(5); // works on msvc
Foo({5}); // works on gcc

However, whoever wrote B should have some idea of what A is doing in its constructors because it's using it. It's not like some mystery code is being wedged in between the two, at least in this case.

Sex Bumbo fucked around with this message at 22:37 on Oct 27, 2015

Jinx
Sep 9, 2001

Violence and Bloodshed
Good code or best code?

code:
typedef std::function<void(int, int, int)> MyFunctor;

struct MyClass {

    MyClass(std::shared_ptr<MyFunctor> f) : f_(f) {}

    void is_do() {
        MyFunctor & f = *f_;
        if ( static_cast<bool>( f ) )
           f();
    }

private:
    std::shared_ptr<MyFunctor> f_;
};
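
(For contrast, a sketch of the boring version: std::function is already nullable and fine to hold by value, and this particular functor takes three ints.)
C++ code:
#include <functional>
#include <utility>

typedef std::function<void(int, int, int)> MyFunctor;

struct MyClass {
    explicit MyClass(MyFunctor f) : f_(std::move(f)) {}

    void is_do() {
        if (f_)            // std::function already converts to bool in a condition
            f_(1, 2, 3);   // MyFunctor is void(int, int, int)
    }

private:
    MyFunctor f_;
};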

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

Mozilla's rr reverse debugger has a stable release with the reversing stuff fully enabled (they describe being able to reverse from exit to invocation of main for a Firefox run): http://robert.ocallahan.org/2015/10/rr-40-released-with-reverse-execution.html

I've only played with it a little bit, but it is pretty incredible so far.

elite_garbage_man
Apr 3, 2010
I THINK THAT "PRIMA DONNA" IS "PRE-MADONNA". I MAY BE ILLITERATE.
I'm currently having some trouble converting a 4-byte (32-bit) string chunk into a 32-bit unsigned int

code:
//convert a 4 byte string chunk to 32 bit uint
	for (int i = 0; i < size; i += 4){
		std::string chunkString = paddedInput.substr(i, 4);
		std::stringstream chunkStream(chunkString);
		std::bitset<32> chunkWord;	
		chunkStream >> chunkWord;                           //this does nothing
		msgBlock[counter] = chunkWord.to_ulong();
		counter++;
	}
The contents get copied to chunkStream, and I verified it by printing out chunkStream.str(), but when I try to convert it to a bitset, the bitset remains at 0.

I've tried using std::bitset<32> chunkWord(chunkString) but I get an invalid argument exception. Any help would be great.

sarehu
Apr 20, 2007

(call/cc call/cc)
Bitsets do input/output with strings using the ASCII digits "0" and "1" (or some locale-specific digits!), not the raw bits of the characters. You'd be better off casting the characters in the std::string to unsigned char, and casting that to uint32_t, and then combining them together with bitwise ops. Or use a union or something, depending on whether you want platform-specific byte order...
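
Something like this for the bitwise version (a sketch; it always treats the first character of the chunk as the most significant byte, regardless of platform endianness):
C++ code:
#include <cstdint>
#include <string>

// Combine four characters into one 32-bit word, big-endian style.
uint32_t chunk_to_uint32(const std::string &s, std::size_t offset)
{
    uint32_t word = 0;
    for (std::size_t i = 0; i < 4; ++i)
        word = (word << 8) | static_cast<unsigned char>(s[offset + i]);
    return word;
}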

sarehu fucked around with this message at 19:58 on Oct 31, 2015

JawKnee
Mar 24, 2007





You'll take the ride to leave this town along that yellow line
I'm currently working on an assignment for a graphics class using OpenGL, but I've run into a snag. I want to use glPushMatrix/glPopMatrix to apply translations/rotations to individual objects in my scene, but... well it isn't having any effect. My display function currently looks like:

code:
void display()
{
	glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	glUniform3fv( theta, 1, Theta );

	glUniform1i(locxsize, xsize); // x and y and z sizes are passed to the shader program to maintain shape of the vertices on screen
	glUniform1i(locysize, ysize);
	glUniform1i(loczsize, zsize);
	
	glBindVertexArray(vaoIDs[1]); // Bind the VAO representing the front of the board 
	glDrawArrays(GL_TRIANGLES, 0, 1200); //Draw the board front

	glBindVertexArray(vaoIDs[2]); // Bind the VAO representing the current tile
	glDrawArrays(GL_TRIANGLES, 0, 4*36); // Draw the current tile 

	glBindVertexArray(vaoIDs[3]); // Bind the VAO representing the back of the board
	glDrawArrays(GL_TRIANGLES, 0, 1200); // Draw the board back

	glBindVertexArray(vaoIDs[4]); // Bind the VAO representing left of the board
	glDrawArrays(GL_TRIANGLES, 0, 1200); // Draw the board left

	glBindVertexArray(vaoIDs[5]); // Bind the VAO representing right of the board
	glDrawArrays(GL_TRIANGLES, 0, 1200); // Draw the board right

	glBindVertexArray(vaoIDs[6]); // Bind the VAO representing the board bottom
	glDrawArrays(GL_TRIANGLES, 0, 1200); // Draw the board bottom

	glBindVertexArray(vaoIDs[7]); // Bind the VAO representing top of the board
	glDrawArrays(GL_TRIANGLES, 0, 1200); // Draw the board top

	glBindVertexArray(vaoIDs[0]); // Bind the VAO representing the grid lines (to be drawn on top of everything else)
	glDrawArrays(GL_LINES, 0, 128); // Draw the grid lines

	glutSwapBuffers();
}
with rotation/translation being handled for the whole scene via the vertex shader. I'm currently trying to rotate the current tile object like so:

code:
void display()
{
	...

	glPushMatrix();
		glRotatef(5.0, 5.0, 5.0, 0.0);	
		glBindVertexArray(vaoIDs[2]); // Bind the VAO representing the current tile
		glDrawArrays(GL_TRIANGLES, 0, 4*36); // Draw the current tile 
	glPopMatrix();

	...

}
But this does nothing. The scene displays as before.

I've read some examples that have the matrix mode set to GL_MODELVIEW and some that don't, but setting it to Model View right before pushing the matrix, and then back to Projection after popping the matrix does nothing either. What am I missing here?

Other potentially pertinent info:

Main
code:
int main(int argc, char **argv)
{
	glutInit(&argc, argv);
	glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE | GLUT_DEPTH);
	glutInitWindowSize(xsize, ysize);
	glutInitWindowPosition(680, 178); 
	glutCreateWindow("Game");
	glewInit();
	init();

	// Callback functions
	glutDisplayFunc(display);
	glutReshapeFunc(reshape);
	glutSpecialFunc(special);
	glutSpecialUpFunc(special_up);
	glutKeyboardFunc(keyboard);
	glutIdleFunc(idle);
	glutTimerFunc(1000, timedMove, 0);

	glutMainLoop(); // Start main loop
	return 0;
}
vshader.glsl
code:
#version 130

in vec4 vPosition;
in vec4 vColor;
out vec4 color;

uniform vec3 theta;

uniform int xsize;
uniform int ysize;
uniform int zsize;

void main() 
{
	vec3 angles = radians( theta );
	vec3 c = cos( angles );
	vec3 s = sin( angles );

	mat4 rz = mat4( c.z, -s.z, 0.0, 0.0, 
			s.z, c.z, 0.0, 0.0, 
			0.0, 0.0, 1.0, 0.0, 
			0.0, 0.0, 0.0, 1.0 );

	mat4 rx = mat4( 1.0, 0.0, 0.0, 0.0, 
			0.0, c.x, -s.x, 0.0, 
			0.0, s.x, c.x, 0.0, 
			0.0, 0.0, 0.0, 1.0 );

	mat4 ry = mat4( c.y, 0.0, s.y, 0.0, 
			0.0, 1.0, 0.0, 0.0, 
			-s.y, 0.0, c.y, 0.0, 
			0.0, 0.0, 0.0, 1.0 );

	mat4 scale = mat4(2.0/xsize, 	0.0, 		0.0, 		0.0,
			  0.0, 		2.0/ysize, 	0.0, 		0.0,
			  0.0, 		0.0, 		2.0/zsize, 	0.0,
			  0.0, 		0.0, 		0.0, 		1.0 );

	// First, center the image by translating each vertex by half of the original window size
	// Then, multiply by the scale matrix to maintain size after the window is reshaped
	vec4 newPos = vPosition + vec4(-200, -360, 0, 0);
	gl_Position = scale * ry * rx * rz * newPos; 

	color = vColor;	
}
I feel like I must be missing something simple wrt using the push/pop functions properly.

JawKnee fucked around with this message at 20:21 on Oct 31, 2015

nielsm
Jun 1, 2009



JawKnee posted:

I'm currently working on an assignment for a graphics class using OpenGL, but I've run into a snag. I want to use glPushMatrix/glPopMatrix to apply translations/rotations to individual objects in my scene, but... well it isn't having any effect.

I feel like I must be missing something simple wrt using the push/pop functions properly.

I don't really know much about OpenGL, but aren't glPushMatrix/glPopMatrix from the old-style fixed-function rendering pipeline? Which should be inactive since you're using a vertex shader?

pseudorandom name
May 6, 2007

JawKnee posted:

I feel like I must be missing something simple wrt using the push/pop functions properly.

Your vertex shader doesn't use any of the builtin matrices so pushing and popping a matrix doesn't do anything.

JawKnee
Mar 24, 2007





You'll take the ride to leave this town along that yellow line

nielsm posted:

I don't really know much about OpenGL, but aren't glPushMatrix/glPopMatrix from the old-style fixed-function rendering pipeline? Which should be inactive since you're using a vertex shader?

Hmm, yeah it appears so. So many deprecated functions!

elite_garbage_man
Apr 3, 2010
I THINK THAT "PRIMA DONNA" IS "PRE-MADONNA". I MAY BE ILLITERATE.

sarehu posted:

Bitsets do input/output with strings using the ASCII digits "0" and "1" (or some locale-specific digits!), not the raw bits of the characters. You'd be better off casting the characters in the std::string to unsigned char, and casting that to uint32_t, and then combining them together with bitwise ops. Or use a union or something, depending on whether you want platform-specific byte order...

Hey, thanks for the suggestion. Here's what I ended up doing to get it working:

code:
//convert a 4 byte string chunk to 32 bit uint
	for (int padIt = 0; padIt < size; padIt += 4){
		
		//get next 4 byte block
		std::string chunkString = paddedInput.substr(padIt, 4);
		std::string binaryChunk;
		int chunkSize = chunkString.length();

		//convert ascii string to its binary representation
		for (std::size_t stringIt = 0; stringIt < chunkSize; stringIt++){
			std::bitset<8> byte(chunkString.c_str()[stringIt]);
			binaryChunk += byte.to_string();
		}

		//take 32-bit binary result then convert it to 32-bit uint
		std::bitset<32> chunkWord(binaryChunk);
		msgBlock[counter] = chunkWord.to_ulong();
		std::cout << " " << chunkString << " " << msgBlock[counter];
		counter++;
	}

Doc Block
Apr 15, 2003
Fun Shoe

JawKnee posted:

Hmm, yeah it appears so. So many deprecated functions!

Nothing in your shader accesses the fixed-function matrices anyway.

Compute the matrices you need on the CPU, since you typically only need to change them once per frame, and sometimes even less often than that (like the projection matrix), and pass them into your vertex shader as uniforms. That way you aren't recomputing them for every vertex.

Pass in the modelViewProjection matrix, plus occasionally the model matrix and/or viewProjection matrix and a normal matrix depending on what lighting effects etc you're going for.

And you'll save yourself a lot of headaches in the future if you learn the difference between the model, view, and projection matrices (and why the OpenGL fixed-function pipeline combined the model & view matrices into one modelView matrix). I speak from experience :)
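
In code the per-object part ends up looking roughly like this (a sketch; "uModel", program, and the Mat4 type/helper are stand-ins for whatever you actually use, e.g. glm):
C++ code:
// Once, after linking the shader program:
GLint uModel = glGetUniformLocation(program, "uModel");

// Per object, per frame: build the matrix on the CPU...
Mat4 model = makeRotationZ(tileAngle);               // hypothetical matrix helper

// ...and hand it to the shader right before drawing that object.
glUniformMatrix4fv(uModel, 1, GL_FALSE, model.data());
glBindVertexArray(vaoIDs[2]);
glDrawArrays(GL_TRIANGLES, 0, 4 * 36);
The vertex shader then does something like gl_Position = projection * view * uModel * vPosition; instead of building the matrices per vertex.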

b0lt
Apr 29, 2005

elite_garbage_man posted:

Hey, thanks for the suggestion. Here's what I ended up doing to get it working:

code:
//convert a 4 byte string chunk to 32 bit uint
	for (int padIt = 0; padIt < size; padIt += 4){
		
		//get next 4 byte block
		std::string chunkString = paddedInput.substr(padIt, 4);
		std::string binaryChunk;
		int chunkSize = chunkString.length();

		//convert ascii string to its binary representation
		for (std::size_t stringIt = 0; stringIt < chunkSize; stringIt++){
			std::bitset<8> byte(chunkString.c_str()[stringIt]);
			binaryChunk += byte.to_string();
		}

		//take 32-bit binary result then convert it to 32-bit uint
		std::bitset<32> chunkWord(binaryChunk);
		msgBlock[counter] = chunkWord.to_ulong();
		std::cout << " " << chunkString << " " << msgBlock[counter];
		counter++;
	}

or

code:
    std::vector<uint32_t> msgBlock(paddedInput.length() / 4);
    memcpy(msgBlock.data(), paddedInput.data(), paddedInput.length());

sarehu
Apr 20, 2007

(call/cc call/cc)
That alternative will have different behavior on little-endian systems, if I'm reading the documentation correctly.

JawKnee
Mar 24, 2007





You'll take the ride to leave this town along that yellow line

Doc Block posted:

Nothing in your shader accesses the fixed-function matrices anyway.

Compute the matrices you need on the CPU, since you typically only need to change them once per frame, and sometimes even less often than that (like the projection matrix), and pass them into your vertex shader as uniforms. That way you aren't recomputing them for every vertex.

Pass in the modelViewProjection matrix, plus occasionally the model matrix and/or viewProjection matrix and a normal matrix depending on what lighting effects etc you're going for.

And you'll save yourself a lot of headaches in the future if you learn the difference between the model, view, and projection matrices (and why the OpenGL fixed-function pipeline combined the model & view matrices into one modelView matrix). I speak from experience :)

Thanks for the advice, I appreciate it. I'm currently working on implementing my own matrix stack and it appears to be working as intended so far.
