Canine Blues Arooo
Jan 7, 2008

when you think about it...i'm the first girl you ever spent the night with



Grimey Drawer
Let's say I want to create something that gives me a timer overlay that draws itself on top of whatever happens to be open at the time. That is to say, I want to be able to hit, say, Alt+F1 and, no matter what is running, have a timer show up in a given location and count down from 10 minutes.

How difficult would this be to do, and how exactly would you go about it? A quick dance around Google suggests you need to write directly to the screen via lower-level hackery, which sounds semi-messy. Is it possible to do something like make a completely transparent window and just write to that? (Or better yet, has it already been done?)

Fruit Smoothies
Mar 28, 2004

The bat with a ZING
So now I've started network programming, and I've got some questions that no tutorial I've found can answer. If you know any good ones, link me please! I've done plenty of C# googling, and even looked back to my Pascal days, where none of this seemed to be a problem. I guess the Delphi components handled much of this for me.

I think I understand the purpose of a (read) buffer, I'm just not entirely sure how it should be used. My understanding is something like this:

1) In a blocking example (which is threaded) the buffer's size doesn't matter. For example, if the data sent is 1024 bytes, and the buffer is 512, it will get filled, and then emptied, and then when no more data is available, the data will be joined. Correct?

2) In an async system (which is what I'm messing around with)... The buffer size DOES matter. If incoming data is 1024 bytes, and the buffer is only 512, you can't guarantee control over combining those sections of data, right? Because only one read event is fired for the whole 1024 bytes?

PLEASE, PLEASE tell me if I'm wrong.

To test the async server, I've been using the demo from here and setting the server's buffer at something ridiculous like 8 bytes. Therefore, when the client sends > 8 bytes, funny things happen.

I don't know if this is because the code was designed for a suitably large buffer, or because it's buggy, or what.

Thanks for your patience!

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Canine Blues Arooo posted:

How difficult would this be to do, and how exactly would you go about it? A quick dance around Google suggests you need to write directly to the screen via lower-level hackery, which sounds semi-messy. Is it possible to do something like make a completely transparent window and just write to that? (Or better yet, has it already been done?)

That should work. I'm assuming you're using Windows. A simple AutoHotKey script might be able to do this well. See the OSD example for details.
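
If you'd rather roll it yourself instead of using AHK, here's a rough Python/tkinter sketch of the borderless, always-on-top window approach (untested assumptions on my part: the -transparentcolor attribute is Windows-only, and this doesn't handle the global Alt+F1 hotkey, just the countdown window itself):
code:
# Rough sketch: a borderless, always-on-top 10-minute countdown drawn over everything.
# Assumptions: Windows (for the -transparentcolor attribute); no global hotkey handling.
import tkinter as tk

DURATION = 10 * 60  # seconds

root = tk.Tk()
root.overrideredirect(True)                    # no title bar or border
root.attributes("-topmost", True)              # stay above other windows
root.attributes("-transparentcolor", "black")  # black pixels become see-through (Windows only)
root.configure(bg="black")
root.geometry("+50+50")                        # park it near the top-left corner

label = tk.Label(root, text="10:00", font=("Consolas", 32), fg="lime", bg="black")
label.pack()

remaining = DURATION

def tick():
    global remaining
    mins, secs = divmod(remaining, 60)
    label.config(text="%02d:%02d" % (mins, secs))
    if remaining > 0:
        remaining -= 1
        root.after(1000, tick)                 # re-arm the 1-second timer

tick()
root.mainloop()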

Look Around You
Jan 19, 2009

Fruit Smoothies posted:

So now I've started network programming, and I've got some questions that no tutorial I've found can answer. If you know any good ones, link me please! I've done plenty of C# googling, and even looked back to my Pascal days, where none of this seemed to be a problem. I guess the Delphi components handled much of this for me.

I think I understand the purpose of a (read) buffer, I'm just not entirely sure how it should be used. My understanding is something like this:

1) In a blocking example (which is threaded) the buffer's size doesn't matter. For example, if the data sent is 1024 bytes, and the buffer is 512, it will get filled, and then emptied, and then when no more data is available, the data will be joined. Correct?

2) In an async system (which is what I'm messing around with)... The buffer size DOES matter. If incoming data is 1024 bytes, and the buffer is only 512, you can't guarantee control over combining those sections of data, right? Because only one read event is fired for the whole 1024 bytes?

PLEASE, PLEASE tell me if I'm wrong.

To test the async server, I've been using the demo from here and setting the server's buffer at something ridiculous like 8 bytes. Therefore, when the client sends > 8 bytes, funny things happen.

I don't know if this is because the code was designed for a suitably large buffer, or because it's buggy, or what.

Thanks for your patience!

I don't know about C# in particular, but with Berkeley Sockets in C (which is probably similar), you have send() and recv() to deal with data over TCP and sendto() and recvfrom() for UDP.

When you send() data, it will block and, upon completion, return a value equal to the number of bytes that were actually sent. This value may be less than the number you told it to send for a number of reasons, though (maybe you tried to send more than you could fit in a packet)! What you're supposed to do is keep calling send() until you've sent everything you want to. It looks roughly like the following (there may be/probably are bugs in it, I didn't actually test it):
code:
char *buf;
int buf_len;
int sfd; // socket to send on
// assume buf and buf_len are allocated and init'd by here
// also assume that sfd is set up by now too
int sent_now = 0;   // how much send() accepted on this call
int sent_total = 0; // how much we've sent so far
do {
  int send_len = buf_len - sent_total; // try to send everything left
  char *send_pos = buf + sent_total;   // start sending where we left off
  sent_now = send(sfd, send_pos, send_len, 0);
  if (sent_now < 0) {
    // handle the error (check errno); bail out, or retry on EINTR
    break;
  }
  sent_total += sent_now;
} while (sent_total < buf_len);
recv()ing data is analogous, except that if you receive as many bytes as you asked for (recvd_len == buf_len), you don't know whether there's more left to read or not, so you need to call recv() again. When recv() returns fewer bytes than you asked for, that's all that was available for now, so for a simple exchange you can treat it as being done. Something like the following (another disclaimer about untested code and bugs here):

code:
// assume stuff is declared and set up and poo poo
int recvd_len = 0; // out here for scoping
do {
  recvd_len = recv(sfd, buf, buf_len, 0);
  if (recvd_len > 0) {
    /* append the recvd_len bytes in buf to a larger buffer somewhere else to store the data */
  } else if (recvd_len < 0) {
    /* handle the error (check errno) */
    break;
  }
} while (recvd_len == buf_len);
Basically, send() fills a buffer that gets drained by the corresponding recv() calls made by the socket you're communicating with.

I don't honestly remember exactly what sendto() and recvfrom() do, but IIRC recvfrom() will clear the socket's 'input buffer' even if your buffer couldn't hold the entire sent message (that is, when recvd_len == buf_len and there was more to read). sendto() basically just throws all of the data onto the network at the intended recipient and forgets about it.

Here is a good guide on socket programming (it's for C but it gives you an idea of how it works).

Scaevolus
Apr 16, 2007

Internet Janitor posted:

...I really think I mention Forth too frequently.
Forth's implicit arity bothers me-- to understand data flow, you have to know the stack effects of every word.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
I haven't used Forth, but I've used Factor, and it wasn't that bad. How do the two.... stack up? (I'm so sorry).

Fruit Smoothies
Mar 28, 2004

The bat with a ZING

Look Around You posted:

Here is a good guide on socket programming (it's for C but it gives you an idea of how it works).

When I looked at Visual Studio C++, I came across those lower level functions and saw examples using them. I was advised in this thread to use .NET and C# which has higher level socket objects. Using blocking sockets is "easy" for the reasons shown. You just keep listening until all the data is obtained. It's mostly the async side I'm interested in, because it's not something I've ever dealt with before.

Look Around You
Jan 19, 2009

Fruit Smoothies posted:

When I looked at Visual Studio C++, I came across those lower level functions and saw examples using them. I was advised in this thread to use .NET and C# which has higher level socket objects. Using blocking sockets is "easy" for the reasons shown. You just keep listening until all the data is obtained. It's mostly the async side I'm interested in, because it's not something I've ever dealt with before.

I agree that C# is probably the better choice. In any case, at least with the low level functions, reading any amount of data on a UDP socket will clear the buffer that got filled by the sending thing. So if you're sent 1024 bytes on UDP and only recvfrom 512 bytes, the last 512 are gone forever, while in TCP it'll stay in the buffer until you read it all.
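
If you want to see that for yourself, here's a quick Python sketch (the platform behaviour is an assumption on my part: Linux/BSD silently truncate the datagram, Windows raises an error instead, but either way the unread tail is discarded):
code:
# Demo: read a 1024-byte UDP datagram with a 512-byte buffer and watch the rest vanish.
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))        # let the OS pick a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"A" * 1024, addr)     # one 1024-byte datagram
send_sock.sendto(b"B" * 16, addr)       # a second, smaller datagram

try:
    data, _ = recv_sock.recvfrom(512)   # only ask for half of the first datagram
    print("got %d bytes of the first datagram" % len(data))
except OSError as exc:
    print("short read failed outright on this platform:", exc)

data, _ = recv_sock.recvfrom(512)       # this returns the *second* datagram
print("next read starts with", data[:4], "- the rest of datagram #1 is gone")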

ComptimusPrime
Jan 23, 2006
Computer Transformer
The buffer size matters insofar as the system will only read bytes from the stream until the buffer is filled.

In a blocking system, picking a fixed buffer size and then looping through the steps lets you perform socket IO with multiple threads in a process: Read until the buffer is full -> Do something with the data read from the buffer -> Yield control of the thread to another waiting task -> Resume at the beginning of the loop

Honestly, async IO is not exactly the easiest concept to understand, but it would be absolutely useless if you only got one callback for the entire block of data that was sent. Here are the basic steps that you follow when reading Async data:
Create a buffer
Call BeginReceive with your buffer and a callback function that will do something meaningful when some amount of data is read....

I am not going to continue as MSDN has amazingly thorough documentation of the whole process:
For a client
For a server
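
The other thing worth internalizing is that each completed read hands you at most one buffer's worth, and it's on you to accumulate chunks until you have a whole message (the MSDN examples linked above accumulate the chunks in much the same way). Since the thread has drifted between languages anyway, here's the same shape sketched with Python's selectors module rather than C#; handle_read and the "peer close means the message is complete" framing are just choices made for the sketch, not anything from the MSDN pages:
code:
# Event-driven reads with a deliberately tiny buffer: each readiness event delivers at
# most BUF_SIZE bytes, and we append to a per-connection bytearray until the peer closes.
import selectors
import socket

BUF_SIZE = 8   # absurdly small on purpose, like the 8-byte experiment above
sel = selectors.DefaultSelector()

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, data=bytearray())   # per-connection buffer

def handle_read(conn, assembled):
    chunk = conn.recv(BUF_SIZE)
    if chunk:
        assembled.extend(chunk)            # partial message: keep accumulating
    else:                                  # peer closed, so the message is complete
        print("full message:", bytes(assembled))
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 9000))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, data=None)

while True:
    for key, _ in sel.select():
        if key.data is None:
            accept(key.fileobj)
        else:
            handle_read(key.fileobj, key.data)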

Internet Janitor
May 17, 2008

"That isn't the appropriate trash receptacle."

Scaevolus posted:

Forth's implicit arity bothers me-- to understand data flow, you have to know the stack effects of every word.

This can be a real issue, but there are ways to mitigate the problem. For anything longer than a few lines I always include stack diagrams clarifying the stack effects of a word. Of course, if these aren't verified by the compiler you've in effect invented a new form of Hungarian notation, and inaccurate documentation can lie to you. There are language extensions which make stack effect diagrams mandatory and statically verified. In general though, data flow isn't that bad to keep track of if you arrange your commutative expressions to minimize the number of elements on the stack at any given time and avoid goofy variable-effect words. If the program is well-factored the stack effects of words should be unsurprising and the overall code should be concise enough to look over in detail.

Suspicious Dish posted:

I haven't used Forth, but I've used Factor, and it wasn't that bad. How do the two.... stack up? (I'm so sorry).

Forth is a low-level systems programming language that is essentially a glorified assembler with a simple subroutine calling convention and the ability to add new syntax or compile-time behaviors. This means you can build it out into whatever language is most appropriate for your task. Factor is more of an applications-level language with features like garbage collection and object orientation baked in. Factor gives you more capabilities out of the box and is much "safer" (for example, like StrongForth it statically verifies your stack effects), but it has a great optimizing compiler and is wicked fast. Factor also replaces most stack-twiddling words with stack combinators that distribute arguments and apply them to anonymous words stored on the stack, leading to a more functional style of programming.

If I wanted to write a web app in a concatenative language, Factor would be the obvious choice. If I want to make a program for an embedded device or some strange new architecture, Forth is easy to port and reason about at a fine-grained level. Much of my interest in Forth stems from the fact that it's sort of a "stone soup" language where you can extend it however you like from a few basic principles, and I use it as a playground for experimenting with compiler techniques and language design. It's also pretty fantastic for writing retro-style video games, as it is low-level enough for writing game engines but flexible enough to be a good scripting language.

Internet Janitor fucked around with this message at 21:32 on Apr 5, 2012

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I have a vague memory of reading an article or a blog post talking about the phenomenon where people become used to a "feature" that isn't really a feature but just a side effect of the way the software was implemented. Then when you do a rewrite you're stuck with either people complaining or having to specifically code a feature that was never an explicitly designed feature to begin with.

Does this ring a bell for anyone? I can't seem to find it or remember any more specifics about who/what/when/where.

Internet Janitor
May 17, 2008

"That isn't the appropriate trash receptacle."
Thermopyle: That sounds interesting, and I can think of a few instances where I've run into that sort of behavior. Can you remember any concrete examples from the article?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Internet Janitor posted:

Thermopyle: That sounds interesting, and I can think of a few instances where I've run into that sort of behavior. Can you remember any concrete examples from the article?

Not really.

I was just reminded of this by someone in the Windows thread. I'm not positive this is an example of the phenomenon, but he didn't like that in Windows 7, if you drag/drop a group of files into an Explorer window, Windows automatically sorts those files into the window's existing order. In XP, when you did this, it grouped all the files at the bottom of the list until you manually sorted the window.

This sounded like it could be an example of this sort of thing...

pokeyman
Nov 26, 2006

That elephant ate my entire platoon.
Windows 7 still does that when you save a file. It shows up at the bottom, disregarding your sorting. Hit F5 and it sorts.

Also, I wonder if they'll ever fix the system tray icons sticking around after the process exits. Move your mouse over the icons and watch them flyyyyy

Plorkyeran
Mar 22, 2007

To Escape The Shackles Of The Old Forums, We Must Reject The Tribal Negativity He Endorsed
Probably not. The icons aren't directly associated with a creating process, so there's no way to check if the process still exists without risking unswapping it if it does.

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe

Thermopyle posted:

Not really.

I was just reminded of this by someone in the Windows thread. I'm not positive this is an example of the phenomenon, but he didn't like that in Windows 7, if you drag/drop a group of files into an Explorer window, Windows automatically sorts those files into the window's existing order. In XP, when you did this, it grouped all the files at the bottom of the list until you manually sorted the window.

This sounded like it could be an example of this sort of thing...

I remember someone, maybe Raymond Chen, writing a blog post about this and how it was changed due to user feedback.

Zhentar
Sep 28, 2003

Brilliant Master Genius

rolleyes posted:

I might be opening myself up to abuse here, but if you intend to stick to high-level languages (which is becoming the norm these days for most business applications) then, personally, I don't think learning C has a huge amount of benefit. The primary lessons you learn in C are that memory management is difficult to do correctly (unchecked buffers, unbounded arrays, etc) and some specifics about the architecture on which you're running - neither of which are applicable to garbage-collected HLLs like Java or C#. There are certainly other lessons you can learn while doing C but I would think you could learn them in Java/C#/whatever as well.

<snip>

edit:
Also, regarding pointers and C#, yes they exist but most of the time you don't need to know about them. Out/ref aren't exposed to the user as pointers in any real sense and you can explain what they do without using the word 'pointer'. Actual pointers exist only in unsafe code regions and certainly as someone learning the language you'd need an extremely good reason to be using them. The nearest direct managed equivalent are delegates (effectively function pointers).

I will kind of back you up here. You can enjoy a life of steady employment and pretty decent income as a mediocre business programmer without ever learning C or understanding pointers and all of that. You can even be a very successful developer (since there's a lot more to development than just programming). It may exclude you from some fields, and there are some costs to it (you're certainly going to be lost if you try to load up a crash dump in a debugger), but it's not the end of the world by any stretch.

ComptimusPrime
Jan 23, 2006
Computer Transformer
Yeah... why don't we just keep on encouraging people toward mediocrity instead of nudging them toward understanding their field more thoroughly.

Being a good problem solver and thinker is more important than knowing how a computer system works deep down, but knowing how a computer works will also allow you to understand the potential solutions better.

Of course, I always err on the side of telling people to aim higher rather than lower.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

ComptimusPrime posted:

Yeah... why don't we just keep on encouraging people toward mediocrity instead of nudging them toward understanding their field more thoroughly.

Being a good problem solver and thinker is more important than knowing how a computer system works deep down, but knowing how a computer works will also allow you to understand the potential solutions better.

Of course, I always err on the side of telling people to aim higher rather than lower.
I don't know enough to directly address the question, but I will say that people have limited resources. There's not enough time to learn everything I think would make me better.

Maybe someone would better meet their goals of building product X or making Y dollars by learning a Python framework or whatever instead of C.

pokeyman
Nov 26, 2006

That elephant ate my entire platoon.
The way I see it, if you're trying to make a living writing code, most of what you write will eventually be either written in C, call into something written in C (the OS, the language runtime, some library you're using), or run on something written in C (a virtual machine, an interpreter). It's really hard to ignore.

Carthag Tuek
Oct 15, 2005

Tider skal komme,
tider skal henrulle,
slægt skal følge slægters gang



If you're setting up a Varnish cache, the config files can contain straight-up C code (and it's practically necessary if you want to do fancy things). It gets compiled and linked when you start Varnish up or use varnishadm to reload or replace the config while it's running. I could probably post some of the horrible configs I made in the horrors thread if I still had them.

karms
Jan 22, 2006

by Nyc_Tattoo
Yam Slacker

pokeyman posted:

The way I see it, if you're trying to make a living writing code, most of what you write will eventually be either written in C, call into something written in C (the OS, the language runtime, some library you're using), or run on something written in C (a virtual machine, an interpreter). It's really hard to ignore.

And yet you never have to touch any of it! Amazing.

Bruegels Fuckbooks
Sep 14, 2004

Now, listen - I know the two of you are very different from each other in a lot of ways, but you have to understand that as far as Grandpa's concerned, you're both pieces of shit! Yeah. I can prove it mathematically.

KARMA! posted:

And yet you never have to touch any of it! Amazing.

Except when something breaks and the debugger pops up with crazy poo poo that's not in "your" code. Your boss isn't going to be "jolly well then, it's not your fault, can't be helped."

Johnny Cache Hit
Oct 17, 2011

hieronymus posted:

Except when something breaks and the debugger pops up with crazy poo poo that's not in "your" code. Your boss isn't going to be "jolly well then, it's not your fault, can't be helped."

Yeah actually I doubt most managers would expect, for example, a PHP developer to dive deep into PHP internals to fix a bug.

I mean there are good reasons to know low-level languages, and understanding how/why your tools break is definitely one of them. But saying "if you can't debug and patch CPython you can never be a good Python developer :smug:" is pretty ridiculous.

The Gripper
Sep 14, 2004
i am winner

Kim Jong III posted:

Yeah actually I doubt most managers would expect, for example, a PHP developer to dive deep into PHP internals to fix a bug.

I mean there are good reasons to know low-level languages, and understanding how/why your tools break is definitely one of them. But saying "if you can't debug and patch CPython you can never be a good Python developer :smug:" is pretty ridiculous.
Depends on the position, if you're working in a development house your manager will likely be as you said, if you're working as a developer in some other kind of business where you're just a tool to solve a specific problem then yeah, saying "our third party library sucks rear end" probably isn't going to get a better reaction than "so?".

Developers dream job vs. "typical crappy developer job", I guess.

Chamelaeon
Feb 29, 2008

Kim Jong III posted:

Yeah actually I doubt most managers would expect, for example, a PHP developer to dive deep into PHP internals to fix a bug.

I mean there are good reasons to know low-level languages, and understanding how/why your tools break is definitely one of them. But saying "if you can't debug and patch CPython you can never be a good Python developer :smug:" is pretty ridiculous.

It's not a question of being able to patch the bug. If you can't understand how the underlying code works then you can't make an effective workaround. You can suggest the use of a different non-broken library, but that may not be an option depending on how much legacy crud you're trucking around (or other factors in the environment).

The Gripper posted:

Depends on the position, if you're working in a development house your manager will likely be as you said, if you're working as a developer in some other kind of business where you're just a tool to solve a specific problem then yeah, saying "our third party library sucks rear end" probably isn't going to get a better reaction than "so?".

Developers dream job vs. "typical crappy developer job", I guess.

Even in an amazing dev shop, you have a job to do. If a tool is standing between me and my assigned task that might be the tool's fault but unless it's a fairly catastrophic failure I still have to get my task done. That means I've got to work around it. If I don't understand the lower-level context of the tool, that makes my job harder. My manager might give me time to go get familiar with that context, but he's not going to write off the task entirely.

NinjaDebugger
Apr 22, 2008


Chamelaeon posted:

Even in an amazing dev shop, you have a job to do. If a tool is standing between me and my assigned task that might be the tool's fault but unless it's a fairly catastrophic failure I still have to get my task done. That means I've got to work around it. If I don't understand the lower-level context of the tool, that makes my job harder. My manager might give me time to go get familiar with that context, but he's not going to write off the task entirely.

Gotta agree here, few are the times that a low level bug has resulted in us switching tools or changing or removing a feature. It's pretty much find a workaround or get out.

Zhentar
Sep 28, 2003

Brilliant Master Genius

hieronymus posted:

Except when something breaks and the debugger pops up with crazy poo poo that's not in "your" code. Your boss isn't going to be "jolly well then, it's not your fault, can't be helped."

No, he'll open a trouble ticket with your vendor.

I'm one of two people on my team who's comfortable opening things up in WinDbg and figuring things out, and frankly, that's one person more than we actually need. Not that I don't value having the skill and understanding, but it's only one of many important skills.

The Gripper
Sep 14, 2004
i am winner

Chamelaeon posted:

Even in an amazing dev shop, you have a job to do. If a tool is standing between me and my assigned task that might be the tool's fault but unless it's a fairly catastrophic failure I still have to get my task done. That means I've got to work around it. If I don't understand the lower-level context of the tool, that makes my job harder. My manager might give me time to go get familiar with that context, but he's not going to write off the task entirely.
Oh I don't disagree with that, but sometimes (I want to say a majority of the time) the best solution isn't going to be "dig into code you don't know and make changes". If it's old unmaintained code and you know that you can fix it with no side-effects, go ahead and fix it. If it's currently maintained, report the issue to the maintainer and hopefully get a relevant fix/workaround in a timely manner. The knee-jerk reaction shouldn't be "dude's a developer, should know C and be able to fix everything".

I'd also be extremely wary of making changes to any code that you don't have intimate domain knowledge of, regardless of coding proficiency. See: all the stories of broken crypto, network and general security implementations scattered all over the internet.

Anyway, that's neither here nor there as far as whether knowing C is crucial; I honestly have never had to touch it for work, even though I know it. If something broke and we had the code for it, my first step would always be to contact the vendor/maintainer, and every time we either had a workaround to apply or a developer suggested an alternative that doesn't suffer from the problem (almost always something newer, replacing a decade-old library or tool) before I had to resort to debugging it myself.

I generally work with interpreted languages, as well as C#, so my workplaces have generally been free of C/C++ in development entirely which I presume affects how useful C knowledge has been to me.

Computer viking
May 30, 2011
Now with less breakage.

Internet Janitor posted:

Forth is a low-level systems programming language that is essentially a glorified assembler with a simple subroutine calling convention and the ability to add new syntax or compile-time behaviors. This means you can build it out into whatever language is most appropriate for your task. Factor is more of an applications-level language with features like garbage collection and object orientation baked in. Factor gives you more capabilities out of the box and is much "safer" (for example, like StrongForth it statically verifies your stack effects), but it has a great optimizing compiler and is wicked fast. Factor also replaces most stack-twiddling words with stack combinators that distribute arguments and apply them to anonymous words stored on the stack, leading to a more functional style of programming.

Regarding Forth: I found this an interesting enough read.

YosefK posted:

This is a personal account of my experience implementing and using the Forth programming language and the stack machine architecture. “Implementing and using” - in that order, pretty much; a somewhat typical order, as will become apparent.

It will also become clear why, having defined the instruction set of a processor designed to run Forth that went into production, I don’t consider myself a competent Forth programmer (now is the time to warn that my understanding of Forth is just that - my own understanding; wouldn’t count on it too much.)

The Gripper
Sep 14, 2004
i am winner
Can anyone give any insight into how to make bitwise left shift in python/Go/whatever work the same way as it does in Java/C#/Probably a ton of others? I'm trying to port something from Scala to Python and some of the arithmetic performed is apparently completely impossible to 1:1 copy, and I don't know enough about bit arithmetic to know of any obvious alternative.

Example:
code:
# in scala
scala> 196L<<56
res12: Long = -4323455642275676160

# in python
>>> 196<<56
14123288431433875456L
# result is 1-bit longer than can fit in a signed int64 (sans sign) (also it's not the right value for this app anyway)
>>> 14123288431433875456L.bit_length()
64
I guess I need a way to do the shift without overflowing a 64-bit signed integer (or overflowing sanely), but I have no idea whether this is actually possible at all. From reading up on Python's implementation, it looks like it just treats the number as if it were represented by an infinite number of bits, so a left shift just grows bit_length() with no chance of overflow.

Also I'm pretty sure wanting an overflow is a pretty dumb idea, but this is in some dumb crypto method so I'm assuming it has some mathematical significance (so it isn't just me wanting to do something for the sake of it).

ToxicFrog
Apr 26, 2008


The Gripper posted:

Can anyone give any insight into how to make bitwise left shift in python/Go/whatever work the same way as it does in Java/C#/Probably a ton of others? I'm trying to port something from Scala to Python and some of the arithmetic performed is apparently completely impossible to 1:1 copy, and I don't know enough about bit arithmetic to know of any obvious alternative.

Example:
code:
# in scala
scala> 196L<<56
res12: Long = -4323455642275676160

# in python
>>> 196<<56
14123288431433875456L
# result is 1-bit longer than can fit in a signed int64 (sans sign) (also it's not the right value for this app anyway)
>>> 14123288431433875456L.bit_length()
64
I guess I need a way to do the shift without overflowing a 64-bit signed integer (or overflowing sanely), but I have no idea whether this is actually possible at all. From reading up on Python's implementation, it looks like it just treats the number as if it were represented by an infinite number of bits, so a left shift just grows bit_length() with no chance of overflow.

Also I'm pretty sure wanting an overflow is a pretty dumb idea, but this is in some dumb crypto method so I'm assuming it has some mathematical significance (so it isn't just me wanting to do something for the sake of it).

What's happening in Scala is that it's doing this:

code:
00000000 00000000 00000000 00000000 00000000 00000000 00000000 11000100 << 56
11000100 00000000 00000000 00000000 00000000 00000000 00000000 00000000
and then interpreting the resulting bit pattern as a 64-bit signed integer in 2's complement, resulting in a value of -4323455642275676160.

Python, meanwhile, simply multiplies it by 2^56 and tacks on bits as needed to store the result.

You can get the right answer by checking if the result would have been negative in 2's complement (which is the case iff the high bit is 1), and subtracting (1 << width) if so:

code:
>>> def lshift2c(n, shift, width=64):
...     n = (n << shift) % (1 << width)
...     if n.bit_length() == width:
...             # high bit is 1, result should be negative
...             n = n - (1 << width)
...     return n
... 
>>> lshift2c(196, 56)
-4323455642275676160L
>>> lshift2c(196, 55)
7061644215716937728
That said, I'd be surprised if Python didn't have a library specifically for working with fixed-length bit strings, which is really what's happening here - the fact that they're represented as numbers is incidental.
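
(The standard library's struct module gets you most of the way there, since it will reinterpret the masked bit pattern as a signed 64-bit value; lshift64 below is just a name I made up for the wrapper:)
code:
# Reinterpret the low 64 bits of a Python int as a signed 64-bit value using struct.
# Same answer as lshift2c above, just leaning on the standard library for the
# 2's complement step.
import struct

def lshift64(n, shift):
    masked = (n << shift) & 0xFFFFFFFFFFFFFFFF                 # keep only the low 64 bits
    return struct.unpack("<q", struct.pack("<Q", masked))[0]   # unsigned bits -> signed int

print(lshift64(196, 56))   # -4323455642275676160, matching the Scala result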

Sil
Jan 4, 2007
I'm not sure if this counts as programming, but I couldn't track down a hardware DIY thread: I want to control a bunch of LEDs out of a USB port. I'm not really sure where to even start. I want to be able to write a program that given some input activates a number of 40-50 LEDs in a certain pattern. I want to use individual LEDs/not a sign board.

The main thing I can't figure out is how to get the LED wires usefully connected to some intermediate thing that then can be controlled by USB input. Basically I am confused.

The Gripper
Sep 14, 2004
i am winner

ToxicFrog posted:

That said, I'd be surprised if Python didn't have a library specifically for working with fixed-length bit strings, which is really what's happening here - the fact that they're represented as numbers is incidental.
Thanks for the info, I figured it'd be something like that but was just flailing around trying combos of shifts to get to the result I wanted. Also, I'd be surprised if there wasn't something in numpy or whatever that deals with it, since it seems like something that would come up often, but it's surprisingly difficult to search for and get any decent results (in fact, one of the results I found was Guido in a mailing list from 1992 talking about implementing shifts the way they are now).

PDP-1
Oct 12, 2004

It's a beautiful day in the neighborhood.

Sil posted:

I'm not sure if this counts as programming, but I couldn't track down a hardware DIY thread: I want to control a bunch of LEDs out of a USB port. I'm not really sure where to even start. I want to be able to write a program that given some input activates a number of 40-50 LEDs in a certain pattern. I want to use individual LEDs/not a sign board.

The main thing I can't figure out is how to get the LED wires usefully connected to some intermediate thing that then can be controlled by USB input. Basically I am confused.

Release the Magic Blue Smoke, it's the Learning Electronics MEGATHREAD could probably help you out there.

Sil
Jan 4, 2007

PDP-1 posted:

Release the Magic Blue Smoke, it's the Learning Electronics MEGATHREAD could probably help you out there.

Wow, thank you, didn't even know that forum existed.

rolleyes
Nov 16, 2006

Sometimes you have to roll the hard... two?

Sil posted:

Wow, thank you, didn't even know that forum existed.

I reckon this would make a good Arduino (or similar) project. Either way, the electronics megathread is probably the best place to answer your hardware queries.

The Gripper
Sep 14, 2004
i am winner

rolleyes posted:

I reckon this would make a good Arduino (or similar) project. Either way, the electronics megathread is probably the best place to answer your hardware queries.
Arduino would be a great choice since it has a solid USB serial interface as well, so you can keep controlling it from the PC instead of doing everything on-board. Lighting up 50 LEDs individually will be a challenge though, since you'll definitely need some Darlington arrays and a bunch of shift registers to switch on one column and one row at a time (rather than sending the signal on one wire to turn one LED on).

It'll be fun though!

Hammerite
Mar 9, 2007

And you don't remember what I said here, either, but it was pompous and stupid.
Jade Ear Joe
The only web scripting language I know is PHP. PHP has a great many flaws, and I should like to learn another web scripting language. Probably Python or Ruby. There is one thing that keeps me from getting comfortable with the way other languages do things, though. When I invoke a PHP script I know exactly what is happening. PHP starts at the top of the script and executes the contents, in order, until it reaches the end of the script. I have seen forums poster tef refer to this as "CGI-style" scripting (or maybe I have misunderstood terms, but that is how I perceived it).

This does not seem to be the way that other web scripting languages work, perhaps with the exception of Perl. They seem to typically involve installing things (you need to install such-and-such a "gem"...), declaring things (tell some program to start listening to requests or dealing with them using a specific workflow), configuring toolsets (for example, a templating engine) that are to be pieced together to form the backbone of the application, choosing the way something should communicate with a web server (python.org's introductory page on "how to use Python in the web" is mostly a load of incomprehensible bullshit about different methods for hooking Python code up to a webserver - what if I just want to write a script?), starting and stopping processes (why can't a request to a given script just invoke the necessary interpreter which then does its thing, like in PHP?), who knows what else.

Please offer tips on how I can become more comfortable with these other web scripting languages' ways of approaching scripting.

nb. I have shared web hosting, but I do not have shell access or any of that good stuff. I could spring for a cheap VPS, but it seems like it should not be necessary to do that just in order to do some tinkering.

Look Around You
Jan 19, 2009

Hammerite posted:

The only web scripting language I know is PHP. PHP has a great many flaws, and I should like to learn another web scripting language. Probably Python or Ruby. There is one thing that keeps me from getting comfortable with the way other languages do things, though. When I invoke a PHP script I know exactly what is happening. PHP starts at the top of the script and executes the contents, in order, until it reaches the end of the script. I have seen forums poster tef refer to this as "CGI-style" scripting (or maybe I have misunderstood terms, but that is how I perceived it).

This does not seem to be the way that other web scripting languages work, perhaps with the exception of Perl. They seem to typically involve installing things (you need to install such-and-such a "gem"...), declaring things (tell some program to start listening to requests or dealing with them using a specific workflow), configuring toolsets (for example, a templating engine) that are to be pieced together to form the backbone of the application, choosing the way something should communicate with a web server (python.org's introductory page on "how to use Python in the web" is mostly a load of incomprehensible bullshit about different methods for hooking Python code up to a webserver - what if I just want to write a script?), starting and stopping processes (why can't a request to a given script just invoke the necessary interpreter which then does its thing, like in PHP?), who knows what else.

Please offer tips on how I can become more comfortable with these other web scripting languages' ways of approaching scripting.

nb. I have shared web hosting, but I do not have shell access or any of that good stuff. I could spring for a cheap VPS, but it seems like it should not be necessary to do that just in order to do some tinkering.

Most modern web frameworks use a Model-View-Controller (MVC) pattern. The three parts are as follows: a "Model", which contains the representation of the data and the rules for manipulating it; a "View" for displaying it to the user; and a "Controller" that decides what needs to be done and how to do it, basically mediating between the Model and the View.

So you'll have a Controller get the request (maybe "yoursite.com/blog/1") and decide what to do with it, usually by calling a function that you "route" that request to. That function (still part of the Controller) will then handle getting the data from the Model and pass it to the View (which is usually a special template file, typically HTML with some code in it to do some display logic).
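
To demystify the "how does Python even talk to the web server" part: underneath all of those frameworks sits WSGI, which is really just a function that receives a request and returns a response. Here's a framework-free sketch using only the standard library; the route, blog_post() and the inline template are all made up for illustration:
code:
# A bare-bones request -> controller -> "view" flow with nothing but the standard
# library's WSGI support. POSTS stands in for the Model; blog_post() is the controller.
from wsgiref.simple_server import make_server

POSTS = {"1": "Hello world", "2": "Second post"}         # stand-in for the Model

def blog_post(post_id):                                  # Controller: fetch data, build the View
    body = POSTS.get(post_id, "No such post")
    return "<h1>Post %s</h1><p>%s</p>" % (post_id, body) # inline "template"

def app(environ, start_response):                        # the WSGI entry point
    path = environ["PATH_INFO"]                          # e.g. "/blog/1"
    if path.startswith("/blog/"):
        html, status = blog_post(path.rsplit("/", 1)[1]), "200 OK"
    else:
        html, status = "<h1>Not found</h1>", "404 Not Found"
    start_response(status, [("Content-Type", "text/html")])
    return [html.encode("utf-8")]

if __name__ == "__main__":
    make_server("127.0.0.1", 8000, app).serve_forever()  # then visit http://127.0.0.1:8000/blog/1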

Here is a good beginner's tutorial for Ruby on Rails (which is obviously written in Ruby).

Here is an overview and a tutorial for the Django framework, which is written in Python.
