|
What is the difference between an int and an unsigned int anyways? I guess I should know this and google it to prevent shame.
|
# ? Apr 8, 2014 06:26 |
|
|
Tusen Takk posted:What is the difference between an int and an unsigned int anyways? (Assuming 32 bit ints.)
|
# ? Apr 8, 2014 06:33 |
|
Hahaha, holy poo poo, my TA for data structures and algorithms posted this to the class's discussion board:quote:For those who bother by using std::xxx, you might add "using namespace std;" for a simpler name space, i.e. xxx instead of std::xxx.
|
# ? Apr 8, 2014 13:07 |
|
Tusen Takk posted:What is the difference between an int and an unsigned int anyways? An "int" is a signed integer, meaning it represents a range of numbers that includes negative numbers. An "unsigned int" can represent only nonnegative integers, trading away the negative half of the range for twice as many positive values. (If the range of nonnegative values offered by a signed int on your platform is ample for your purposes, then you could still use a signed int to store a value that ought to always be nonnegative, if that fits your needs.) As for the advisability of using signed or unsigned ints for particular purposes (absent a real need for a particular range of representable values), I don't know, as I am a newb when it comes to low-level languages like C++.
|
# ? Apr 8, 2014 13:12 |
|
hooah posted:Hahaha, holy poo poo, my TA for data structures and algorithms posted this to the class's discussion board: It's certainly a code smell, but it's not the same level of horror as having using namespace std in a header file.
|
# ? Apr 8, 2014 13:15 |
|
Edison was a dick posted:It's certainly a code smell, but it's not same level of horror as having using namespace std in a header file. Except she didn't specify where to use it; she just said to use it in general. The projects we're doing have source and header files.
|
# ? Apr 8, 2014 13:29 |
|
Edison was a dick posted:It's certainly a code smell, but it's not same level of horror as having using namespace std in a header file. As a beginning CS student, what's wrong with this? I've been doing it, and now I feel bad.
|
# ? Apr 8, 2014 16:26 |
|
It pollutes the namespace of every consumer of your header with the crap you pulled in with the using directive, leading to unexpected collisions. e: here's an example of what can go wrong in your_header.h C++ code:
C++ code:
Blotto Skorzany fucked around with this message at 16:42 on Apr 8, 2014 |
# ? Apr 8, 2014 16:35 |
|
Is there a name for this? You have an array of n objects. Some are red, some are blue. You want to sort the array so that the blue objects are at the start and the red objects are at the end. You do not care whether two red objects or two blue objects stay in the same relative position. For simplicity assume that you know up front the number of red and the number of blue objects, in particular you know whether either is zero (in which case the array obviously is already sorted). You initialise two counters, i and j say. i is initialised to zero and j is initialised to n - 1. i counts upwards until the ith array element is found to be red. Then j counts downwards until the jth array element is found to be blue. Then you check whether j is less than i. If it is then you are finished, otherwise swap the ith and jth elements and go back to incrementing i.
|
# ? Apr 8, 2014 16:40 |
|
Hammerite posted:Is there a name for this? "Unstable sort". Nb. that people will be less reticent to assist w/ homework problems when you're explicit about their nature
|
# ? Apr 8, 2014 16:44 |
|
Otto Skorzeny posted:"Unstable sort". I apologise. I did not mean to deceive anyone. I will be explicit in future if I am asking in relation to something similar.
|
# ? Apr 8, 2014 16:47 |
|
Re: using namespace, there are two problem cases that tend to crop up if you just throw using directives around everywhere. One is having using namespace in a header as Skorzeny mentioned and the other is putting it before includes in a .cpp file. Writing:C++ code:
There's nothing particularly wrong with using namespace in a .cpp file where you know you don't have name collisions and are making frequent use of things in that namespace, even if that namespace happens to be std and you're making frequent use of the STL. Xerophyte fucked around with this message at 18:50 on Apr 8, 2014 |
# ? Apr 8, 2014 17:22 |
|
shrughes posted:Unsigned int is the maximal code smell as far as unsignedness goes. I'm curious, why don't you like unsigned int? I use it a fair bit for "unsigned, natural machine word size". I like the "no negatives" documentation element, and using the natural size would keep the compiler from having to generate masking or whatever like you could see with short or whatever (perhaps more historically than now).
|
# ? Apr 8, 2014 17:30 |
|
I am writing some code which will deal with large-ish (a few thousand members) arrays of small integers (like, no more than 4 digits). Should I use short, or just go with int? I know that short is at least 16 bits, so there is no question of it being big enough, I just wonder if it will be looked at as premature optimisation and/or an ignorant ineffectual attempt to optimise (because I don't know whether it, in fact, optimises anything) Disclosure: I am writing code for an evaluation exercise from a company to which I am applying for a job.
|
# ? Apr 8, 2014 18:05 |
Hammerite posted:I am writing some code which will deal with large-ish (a few thousand members) arrays of small integers (like, no more than 4 digits). Should I use short, or just go with int? I know that short is at least 16 bits, so there is no question of it being big enough, I just wonder if it will be looked at as premature optimisation and/or an ignorant ineffectual attempt to optimise (because I don't know whether it, in fact, optimises anything) A few thousand 32 bit ints still make for less than 100 kb of memory. A modern system won't even bat an eye at it. Call back when you get into several million items.
|
|
# ? Apr 8, 2014 18:27 |
|
Hammerite posted:I am writing some code which will deal with large-ish (a few thousand members) arrays of small integers (like, no more than 4 digits). Should I use short, or just go with int? I know that short is at least 16 bits, so there is no question of it being big enough, I just wonder if it will be looked at as premature optimisation and/or an ignorant ineffectual attempt to optimise (because I don't know whether it, in fact, optimises anything) If this is for an embedded system, using the smallest type that can hold all required values makes sense. Otherwise, I'd call it premature optimization.
|
# ? Apr 8, 2014 18:42 |
|
Subjunctive posted:I'm curious, why don't you like unsigned int? I use it a fair bit for "unsigned, natural machine word size". I like the "no negatives" documentation element, and using the natural size would keep the compiler from having to generate masking or whatever like you could see with short or whatever (perhaps more historically than now). It's a bad smell because the benefits are weak arguments like this one. The documentation advantage is illusory, it's the inverse Ackermann function of advantages. Almost every time I've seen an unsigned int, the code was brittle against values greater than 2 billion anyway because somewhere, it gets converted to an int. Or it gets compared against an int and the person writing the code didn't have warnings turned on, or there's casting on all the comparisons. Or it's being used in places where a size_t would be the correct choice, or a specific choice of uintN_t.
|
# ? Apr 8, 2014 18:43 |
|
Hammerite posted:I am writing some code which will deal with large-ish (a few thousand members) arrays of small integers (like, no more than 4 digits). Should I use short, or just go with int? So the real question is, how is the size of the array actually determined? Assuming it's not hardcoded, there's probably some function that populates the array and indicates its size (number of elements). Whatever integer type returned by the "size" function is a good candidate. If the array is hardcoded, I typically compute the number of elements as "sizeof(array)/sizeof(array[0])". Since the result type of the sizeof operator is size_t, then size_t would be the appropriate index type. Otherwise, in absence of a specific circumstance that demands a better choice, size_t is usually a good answer as it's sufficiently large to index any byte array that can fit in the process VM space. Hammerite posted:I know that short is at least 16 bits, so there is no question of it being big enough, I just wonder if it will be looked at as premature optimisation and/or an ignorant ineffectual attempt to optimise (because I don't know whether it, in fact, optimises anything) ExcessBLarg! fucked around with this message at 19:21 on Apr 8, 2014 |
# ? Apr 8, 2014 19:16 |
|
shrughes posted:Or it's being used in places where a size_t would be the correct choice, or a specific choice of uintN_t. Yeah, that's true. Guess I have a decade of habit to reform!
|
# ? Apr 8, 2014 19:16 |
|
ExcessBLarg! posted:If the array is hardcoded, I typically compute the number of elements as "sizeof(array)/sizeof(array[0])". Since the result type of the sizeof operator is size_t, then size_t would be the appropriate index type. I think he's asking about the storage type rather than the index type; the latter is size_t all the way IMO.
|
# ? Apr 8, 2014 19:18 |
|
Subjunctive posted:I think he's asking about the storage type rather than the index type; the latter is size_t all the way IMO.
|
# ? Apr 8, 2014 19:26 |
|
Yes, sorry, I am talking about the items stored as the values of the array. I had forgotten that C++ was flexible in allowing customised array access. Actually the array is fixed in size. That might sound strange, I don't really want to discuss it further though because I figure it would not be proper to reveal details of the task.
|
# ? Apr 8, 2014 19:28 |
|
Hello again. So I have a dilemma, I am working on a Vigenère cipher and have almost got it complete, but for some reason combining a cipher letter of Z or z with any letter greater than G seems to give me a negative ASCII value (i.e. -128 when I add values 25 and 103). I flooded my code with print functions and can't see where it is going wrong. Everything else seems to be going great except when a Z is introduced in the argument and used to modify an input code letter greater than g. Upper-case G works great. Any advice on this would be hugely appreciated.code:
|
# ? Apr 8, 2014 19:30 |
|
A signed 1 byte integer (char) can only hold the values from -128 to 127
|
# ? Apr 8, 2014 19:49 |
|
astr0man posted:A signed 1 byte integer (char) can only hold the values from -128 to 127 So should I convert to an int and redefine it as a char after the encryption? Edit: It worked! Thanks! durtan fucked around with this message at 20:03 on Apr 8, 2014 |
# ? Apr 8, 2014 19:59 |
|
astr0man posted:A signed 1 byte integer (char) can only hold the values from -128 to 127 What makes you think char is a signed type?
|
# ? Apr 9, 2014 00:53 |
|
I think I tried to be a bit too clever. I have a function that accepts a std::ostream & as a parameter. I want to pass a std::ofstream to it. But I get linking error LNK2019. And a long error message that contains a lot of @'s for some reason. Is there a way to get this to work?... I can post more detail if it'd help. Disclosure: I am writing code for an evaluation exercise from a company to which I am applying for a job.
|
# ? Apr 9, 2014 02:33 |
|
shrughes posted:What makes you think char is a signed type? What makes you think char is an unsigned type?
|
# ? Apr 9, 2014 02:37 |
|
shrughes posted:What makes you think char is a signed type? IIRC it's signed on x86 (gcc/llvm/MSVC) and unsigned on ARM by default, because ARM didn't have a sign-extending byte load instruction. Regardless of char's signedness, though, char, unsigned char and signed char are distinct, and char isn't compatible with the other two. Yay C.
|
# ? Apr 9, 2014 02:39 |
|
Hammerite posted:I think I tried to be a bit too clever. I have a function that accepts a std::ostream & as a parameter. I want to pass a std::ofstream to it. But I get linking error LNK2019. And a long error message that contains a lot of @'s for some reason. Is there a way to get this to work?... I can post more detail if it'd help. I was mistaken about this, it has nothing to do with the ostream/ofstream thing. The error message made me think that's what it was, but it was actually just me forgetting to update a function declaration with a parameter I had added.
|
# ? Apr 9, 2014 02:56 |
|
Hammerite posted:Is there a name for this? As Otto said, the fact that you don't care about the relative ordering of only red or only blue means this is an unstable sort. The fact that a simple linear-time algorithm exists is because you have exactly two different keys (red and blue). For three or more keys this won't work and you have to use a general n log n algorithm or radix sort or similar. It's somewhat common in parameterised computer science problems that the problem with K=2 is somehow fundamentally "much easier" than the problem for any greater K (see also SAT, graph colouring). I don't know if there's a name for that particular phenomenon, though. seiken fucked around with this message at 03:16 on Apr 9, 2014 |
# ? Apr 9, 2014 03:04 |
|
shrughes posted:What makes you think char is a signed type? Sorry shrughes I guess I should have been more specific and said that in his specific case his machine was treating a variable declared as a char as a signed 8-bit integer and that it was overflowing. I hope you can forgive me for making this mistake.
|
# ? Apr 9, 2014 03:15 |
|
Hammerite posted:Is there a name for this? "Partition" is probably more accurate than "sort", here. See e.g. http://www.cplusplus.com/reference/algorithm/partition/.
|
# ? Apr 9, 2014 07:19 |
|
It's been a bit since I've used getline, so I'm a little lost as to why this isn't doing what I expect:C++ code: The first getline does what I'd expect, but the second never accepts input. Why is that?
|
# ? Apr 9, 2014 12:29 |
seiken posted:As Otto said, the fact that you don't care about the relative ordering of only red or only blue means this is an unstable sort. The fact that a simple linear-time algorithm exists is because you have exactly two different keys (red and blue). For three or more keys this won't work and you have to use a general n log n algorithm or radix sort or similar. It's somewhat common in parameterised computer science problems that the problem with K=2 is somehow fundamentally "much easier" than the problem for any greater K (see also SAT, graph colouring). I don't know if there's a name for that particular phenomenon, though. If you know the number of partitions beforehand, you should be able to implement an in-place algorithm running in O(n*(K-1)) time, n being length of array and K being number of partitions. Keep a table of "end of placed elements", one for each partition. For each element in the array, first to last: Find which partition the element belongs to. If this is the last partition, it's already in place. Otherwise, swap it into the one-past-last position for the appropriate partition, update the table of end-of-partition indexes. This places the first element of the following partition in the wrong location, swap that into position as well and update the partition indexes table, repeat until the last partition is updated. This should result in at most K-1 swaps for each element in the array. (If all elements are unique/a separate partition, this obviously degenerates to a simple n^2 sort.)
|
|
# ? Apr 9, 2014 14:50 |
|
FamDav posted:What makes you think char is an unsigned type? What makes you think that shrughes thinks char is an unsigned type?
|
# ? Apr 9, 2014 16:29 |
|
seiken posted:As Otto said, the fact that you don't care about the relative ordering of only red or only blue means this is an unstable sort. The fact that a simple linear-time algorithm exists is because you have exactly two different keys (red and blue). For three or more keys this won't work and you have to use a general n log n algorithm or radix sort or similar. It's somewhat common in parameterised computer science problems that the problem with K=2 is somehow fundamentally "much easier" than the problem for any greater K (see also SAT, graph colouring). I don't know if there's a name for that particular phenomenon, though. It generally arises when the most efficient way to solve K > 2 is by decomposing it into subproblems where K=2 and solving those. One example would be quicksort - sorting is essentially partitioning into K groups, where K = N. Quicksort works by partitioning with K=2, and then repeating the process on each partition until you get down to the individual elements.
|
# ? Apr 9, 2014 16:40 |
|
hooah posted:The first getline does what I'd expect, but the second never accepts input. Why is that? It's got the wrong method signature code:
The C++ FAQ may help, though this part is using the shift operators instead of getline.
|
# ? Apr 9, 2014 17:39 |
|
ExcessBLarg! posted:What makes you think that shrughes thinks char is an unsigned type? Only shrughes can settle this burning debate! Edison was a dick posted:It's got the wrong method signature Wrong getline. http://www.cplusplus.com/reference/string/string/getline/ It does consume the delimiter, FWIW. (I'm not sure what's wrong with the code at first glance.)
|
# ? Apr 9, 2014 17:55 |
|
|
Subjunctive posted:I think he's asking about the storage type rather than the index type; the latter is size_t all the way IMO. Actually, the index type in C++ is ptrdiff_t (13.6.13).
|
# ? Apr 9, 2014 18:01 |