|
I have this simple problem that I just can't figure out. I have a 256×256-pixel image. The left and right borders each correspond to a longitude value, and the top and bottom edges each correspond to a latitude value. Basically the 256×256 image is a rectangle laid onto a map. If you have a lat/long value that lies inside the rectangle, how do you get the pixel coordinates of that point? I've been messing with this all morning, but I just can't seem to find a formula that works.
|
# ? May 29, 2009 03:57 |
|
code:
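For the straight linear case (treating the rectangle as flat and ignoring projection distortion over a small area), something like this sketch works; the function name and the top-left-origin convention are assumptions, not anything from the original post:

```python
def latlon_to_pixel(lat, lon, north, south, west, east, size=256):
    """Linearly map a lat/lon inside the bounding box to pixel coordinates.

    (0, 0) is the top-left corner of the image; y grows downward,
    as is usual in image space.
    """
    x = (lon - west) / (east - west) * size
    y = (north - lat) / (north - south) * size
    return x, y
```

Because everything is a ratio of differences, it works the same with negative latitudes and longitudes: the center of a box from (-34, -58) to (-24, -48) lands at pixel (128, 128).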
|
# ? May 29, 2009 03:59 |
|
Can anyone recommend a book for moving from C# to C++?
|
# ? May 29, 2009 04:29 |
|
Avenging Dentist posted:
the only problem is that it doesn't even begin to work anywhere but the northeastern hemisphere.
|
# ? May 29, 2009 05:17 |
|
Let's start with the basics. 1) The Earth is a sphere. 2) A 256-pixel square image is not a sphere. 3) You need to pick a projection. 4) Come back when you've got one.
|
# ? May 29, 2009 05:26 |
|
nbv4 posted:the only problem is that doesn't even begin to work anywhere but the north eastern hemisphere.

Um no.
Buenos Aires, Argentina: latitude -34°, longitude -58°
Somewhat northeast of Buenos Aires: latitude -24°, longitude -48°
code:
Avenging Dentist fucked around with this message at 05:40 on May 29, 2009 |
# ? May 29, 2009 05:32 |
|
rjmccall posted:Let's start with the basics. Given small enough sections of the globe, you can use a sphere's local Euclidean-ness to ignore error caused by deformation.
|
# ? May 29, 2009 05:33 |
|
Oops
|
# ? May 29, 2009 05:40 |
|
nbv4 posted:the only problem is that doesn't even begin to work anywhere but the north eastern hemisphere. Assuming your projections are small enough and low-latitude enough that the local projection is close enough to a square, it should work everywhere. The only place you have to be particularly clever is if you take a projection that crosses the 180th longitude, in which case you need to normalize so both edges are either positive or negative.
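The normalization for a box crossing the 180th meridian can be sketched like this (the helper name is made up; the idea is just to shift longitudes into one contiguous 360° interval starting at the western edge):

```python
def normalize_lon(lon, west):
    """Shift a longitude into [west, west + 360) so that a box crossing
    the 180th meridian becomes a single contiguous interval."""
    while lon < west:
        lon += 360
    while lon >= west + 360:
        lon -= 360
    return lon

# Example: a box from 170°E across the antimeridian to 170°W (-170).
west = 170
east = normalize_lon(-170, west)   # unwraps to 190
point = normalize_lon(-175, west)  # unwraps to 185, inside [170, 190]
```

After normalizing both the eastern edge and the query point against the western edge, the ordinary linear interpolation works unchanged.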
|
# ? May 29, 2009 05:40 |
|
Avenging Dentist posted:Given small enough sections of the globe, you can use a sphere's local Euclidean-ness to ignore error caused by deformation. My assumption was that he was fretting over formulas because he wanted/needed to solve the problem more generally. If he actually just couldn't figure out how to handle wraparound for conformal cylindrical, well.
|
# ? May 29, 2009 05:55 |
|
nevermind
nbv4 fucked around with this message at 07:04 on May 29, 2009 |
# ? May 29, 2009 05:57 |
|
Avenging Dentist posted:Given small enough sections of the globe, you can use a sphere's local Euclidean-ness to ignore error caused by deformation. You could also project it into hyperbolic space and rely on the global Non-Euclidean-ness to return approximate values near the edges.
|
# ? May 29, 2009 05:57 |
|
nbv4 posted:I'm using the Mercator projection that google maps uses. When I use that formula, I get this: That image is really unhelpful for diagnosing your problem. Can you post the specific numbers you're using for a single image and point, the calculation you're doing, the result you get, and the result you'd expect instead?
|
# ? May 29, 2009 06:04 |
|
Do you have some sort of misplaced fetish for abs? I have no idea what you think you're doing with that calculation. code:
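For reference, a sketch of the Google-Maps-style Web Mercator mapping from lat/lon to global pixel coordinates, assuming 256-pixel tiles; note there's no abs anywhere, since the formulas handle negative coordinates on their own:

```python
import math

def mercator_pixel(lat, lon, zoom):
    """Project lat/lon to Web Mercator pixel coordinates at a zoom level.

    The world is a square of 256 * 2**zoom pixels; (0, 0) is the
    top-left (northwest) corner, so y grows toward the south.
    """
    world = 256 * 2 ** zoom
    x = (lon + 180) / 360 * world
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * world
    return x, y
```

At zoom 0 the equator/prime-meridian intersection lands at the center of the 256-pixel world, and northern latitudes give smaller y values.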
|
# ? May 29, 2009 07:09 |
|
rjmccall posted:Do you have some sort of misplaced fetish for abs? I have no idea what you think you're doing with that calculation. code:
|
# ? May 29, 2009 08:08 |
|
Rusty_ posted:Can anyone reccomend a book for moving from c# to c++? Accelerated C++.
|
# ? May 30, 2009 00:20 |
|
I want to learn HLSL for creating shaders as a technical artist; how should I go about doing this? The HLSL intro tutorials like http://www.neatware.com/lbstudio/web/hlsl.html all seem to assume prior knowledge of a lot of programming concepts. Should I learn another programming language first? Or does anyone know of an HLSL guide more angled at beginners? I don't have any programming knowledge at the moment.
|
# ? May 31, 2009 21:51 |
|
Modern RDBMSes are based on arrays, right? So the data is stacked together, side by side as columns, and then rows, and it makes access really fast... But what if we had a database that was more suited for like... Trees? Like HTML or XML documents. How does that get represented as a stream of bytes in memory or on a disk? I'm thinking a linked list of some sort could solve the problem in some abstract way, but then the data isn't side by side anymore and makes lookups really slow. How are databases like this implemented?
|
# ? Jun 1, 2009 15:05 |
|
Triple Tech posted:Modern RDBMSes are based on arrays, right? So the data is stacked together, side by side as columns, and then rows, and it makes access really fast...

Not really, no. Most of the high-speed databases are a variation on something like a B-tree. Indexes are stored separately, and allow logarithmic-time access to any record, as long as you are keying the access on something you bothered to index ahead of time.

Triple Tech posted:But what if we had a database that was more suited for like... Trees? Like HTML or XML documents. How does that get represented as a stream of bytes in memory or on a disk? I'm thinking a linked list of some sort could solve the problem in some abstract way, but then the data isn't side by side anymore and makes lookups really slow. How are databases like this implemented?

Storing trees in a relational database with left/right indices isn't that bad, and has basically all the properties you want out of a trivial tree. For markup like XML or HTML, you could convert it to canonical tree form and store that, but in a traditional database it wouldn't work particularly well because each node is so small, and doing anything interesting involves touching a lot of nodes, which would mean a lot of back-and-forth. More likely would be to precompile a parse tree for your specific application, and store the parse tree in the database in document-per-record form, or at least transactional-group-per-record.
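The left/right-index scheme (often called the nested set model) can be sketched like this, assuming the tree is just a dict of node to children; the names are illustrative:

```python
def nested_set(tree, root):
    """Assign (left, right) indices to every node via a preorder walk.

    X is a descendant of Y iff Y.left < X.left and X.right < Y.right,
    which turns "fetch the whole subtree" into a single range query
    in a relational database.
    """
    indices, counter = {}, [0]

    def walk(node):
        counter[0] += 1
        left = counter[0]
        for child in tree.get(node, []):
            walk(child)
        counter[0] += 1
        indices[node] = (left, counter[0])

    walk(root)
    return indices

# A toy document tree: html -> (head, body), body -> (p,)
doc = {"html": ["head", "body"], "body": ["p"]}
```

With these indices stored as two integer columns, "all nodes under body" is just `WHERE left > 4 AND right < 7`, with no recursion at query time.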
|
# ? Jun 1, 2009 15:15 |
|
Triple Tech posted:Modern RDBMSes are based on arrays, right? So the data is stacked together, side by side as columns, and then rows, and it makes access really fast... There are column-oriented databases that optimize for pulling lots of values for a single attribute across many records (like you have a lot of accounts and you just want the names of each but don't care about all the other details). Some (many?) of them go so far as to arrange the data on disk in such a way that a sequential read from disk goes down a single column instead of across a row.
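The row- vs column-oriented difference can be sketched as a choice of in-memory layout; this is a toy illustration, not how any real engine stores pages:

```python
# Row-oriented: each record's fields sit together, so reading one
# whole record is cheap but scanning one attribute touches everything.
rows = [
    {"id": 1, "name": "alice", "balance": 10},
    {"id": 2, "name": "bob", "balance": 20},
]

# Column-oriented: each attribute's values sit together, so scanning
# one attribute across all records is a single sequential pass.
columns = {
    "id": [1, 2],
    "name": ["alice", "bob"],
    "balance": [10, 20],
}

names = columns["name"]                       # one contiguous read
names_from_rows = [r["name"] for r in rows]   # touches every record
```

Both layouts answer the same queries; the difference is which access pattern maps onto sequential disk reads.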
|
# ? Jun 1, 2009 16:02 |
|
A vague question, because I'm still forming my thoughts about this: We have a number of large datasets that it would be useful to query remotely. The obvious old-school way would be to stick them in a database and open a hole in our firewall for traffic. I'm dissatisfied by this because I'd rather the queries and data were at a higher level (e.g. returning objects, not records), I don't want "the other end" to have to worry about implementation details (e.g. "they're using MySQL with this schema"), and so on. So I've started thinking about a data server that can be connected to and queried over the web. So:
* What's the prior art for this? Of course there's geospatial data, and there must be some huge atmospheric datasets that can be queried. Obviously a plain vanilla webservice might do, but if we need security (which is likely, as some data will not be available to everyone) a persistent connection would be useful.
* Similarly, what sort of art is there for query languages? SQL is an obvious starting point, but while SQL may be what's doing the work behind the scenes, we don't need the full richness of that at the user end, and writing our own general SQL implementation would be unnecessary.
|
# ? Jun 1, 2009 17:11 |
|
How expressive do you want access to your data? There's mostly fetch_by_id and fetch_all. Anything else, given customer intervention, is doomed to slowly reinvent parts of SQL.
|
# ? Jun 1, 2009 17:30 |
|
I just installed SQL 08 with BIDS. when I go to create a new project, I'm missing "Report Server Project" and a couple others. What did I mess up? EDIT: I didn't mess up anything, apparently. Microsoft just decided that those templates needed to be under a different heading in 2008 as opposed to 2005. Carry on. ZentraediElite fucked around with this message at 21:05 on Jun 1, 2009 |
# ? Jun 1, 2009 20:55 |
|
Triple Tech posted:How expressive do you want access to your data? There's mostly fetch_by_id and fetch_all. Anything else, given customer intervention, is doomed to slowly reinvent parts of SQL. More expressive - say "fetch all X with a modification date between Y and Z or a title that includes A or a location that is B". That's what we have at the moment implemented via webforms (and via SQL underneath). Your point about reinventing SQL is well taken - one of the reasons why I want to see what else has been done before.
|
# ? Jun 2, 2009 08:22 |
|
This is more of a general design question about indexes for objects. I have a list of units with an id/unit-type number from 1-15, so I have 15 units. Currently I have a massive mix of ways to put those units into vectors/arrays.
1. Use a filler 0 element. Array size = 16, type == index position. For loops and stuff look a bit crappy. Getting the array from the SQL database is nicer.
2. Use no filler. Array size = 15, type - 1 == index. For loops are nicer. Getting it from the SQL database isn't as nice.
3. Have unit types starting from 0-14. Seems alright, but I'm not too sure about having an element with type 0. Functions would return 0 for the first unit, which I'm not too hot about. Otherwise it seems like an option.
I've currently got a mix of 1 and 2, and I assume it's best to change to a single style, even if it's not as natural in some instances. For Technology things are different, and a different style from units is more natural. Maybe there is another option.
|
# ? Jun 2, 2009 14:09 |
|
outlier posted:More expressive - say "fetch all X with a modification date between Y and Z or a title that includes A or a location that is B". That's what we have at the moment implemented via webforms (and via SQL underneath). SELECT * FROM X WHERE mod_date BETWEEN y AND z OR title LIKE '%A%' OR location = 'B'; ? If you go very far along this path, you're certainly going to end up re-implementing SQL. SQL is exactly as expressive as you need it to be. Simple queries are (in the general case) fairly simple to write and if, as you say, you have no need for anything more complicated than the above, you should be fine and don't need to get into the more cryptic features.
|
# ? Jun 2, 2009 14:10 |
|
Unparagoned posted:This is more of a general design question about indexes for objects. I have a list of units with an id/unit type number from 1-15, so I have 15 units. Currently I have a massive mix of ways to put those units into vectors/arrays. There is the object oriented approach, so you could have something like this: code:
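One way to sidestep the off-by-one question entirely is to key units by their type id rather than by array position; a sketch, with class and field names made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Unit:
    type_id: int
    name: str

# Keyed by type id: no filler element, no type - 1 arithmetic, and
# loading from SQL is just units[row_type_id] = Unit(...) per row.
units = {u.type_id: u for u in (Unit(1, "worker"), Unit(2, "soldier"))}

worker = units[1]  # look up by the same id the database uses
for type_id, unit in sorted(units.items()):
    pass  # iterate in id order without caring whether ids start at 0 or 1
```

The same style then works unchanged for Technology or anything else with its own id scheme, since the container never assumes ids are dense or zero-based.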
|
# ? Jun 2, 2009 14:47 |
|
Triple Tech posted:Modern RDBMSes are based on arrays, right? So the data is stacked together, side by side as columns, and then rows, and it makes access really fast... The closest thing to what you're asking that I'm aware of is indexing XML documents into relational databases, mainly done for scaling up the search of XML documents. Naturally, this is much faster and uses less memory than building up a tree and searching that. It isn't a terribly elegant approach, but considering the years of work done in databases, leveraging a seasoned field to improve search is a pretty killer idea.
|
# ? Jun 3, 2009 03:30 |
|
Triple Tech posted:Modern RDBMSes are based on arrays, right? So the data is stacked together, side by side as columns, and then rows, and it makes access really fast... Do you really think "side by side" storage is faster than a lookup tree? There are DB systems optimized for frequent massive non-conditional reads, in which case sequential storage works, but trees are pretty much the default for storage in general. Read up on B-trees and filesystems. If you're thinking of an XML-geared DB, maybe you want something like this? Fake edit: it's programmers like you that made FAT32 fragmentation a problem.
|
# ? Jun 3, 2009 07:20 |
|
nasoren posted:Do you really think "side by side" storage is faster than a lookup tree? There are db systems optimized for frequent massive non-conditional reads, in which case sequential storage works, but trees are pretty much default for storage in general. Read up b-trees and filesystems. Berkeley DB: when you absolutely need your program to deadlock as much as possible.
|
# ? Jun 3, 2009 07:25 |
|
Steve French posted:Berkeley DB: when you absolutely need your program to deadlock as much as possible. Aren't you a little big biased?
|
# ? Jun 3, 2009 16:09 |
|
Ugg boots posted:Aren't you a little big biased? He's into Big Database Software & Maintenance.
|
# ? Jun 4, 2009 08:20 |
|
Scaevolus posted:He's into Big Database Software & Maintenance. Really I'm mostly just into row-level locking.
|
# ? Jun 4, 2009 11:03 |
|
I have a really noobish Access question. I'm trying to make a form that, when a control button is clicked, takes a number value from a text field and subtracts it from a field in a particular record. How do I go about doing this? Things I've figured out: I'll need to convert the text field to an integer using CInt(), and I'll need to qualify which record to use with another control on the form (using a listbox). I was trying to set it so that when the button is clicked it runs UPDATE/SET/WHERE, but Access wasn't having any of that. Am I retarded?
|
# ? Jun 4, 2009 16:09 |
|
I need some ideas for a good way to go about solving this problem. I'm working on a calculator for a specific material balance problem (ChemE major). Basically, I have 3 equations and 7 variables. I want to make a program that takes any 4 variables the user knows and calculates the remaining 3 as outputs. I can use matrices to solve this by hand when given any 4 variables, so I know it is possible; the only problem is that there are [7!/(3!*4!)] = 35 combinations of 4 variables, meaning there are 35 possible calculations my program would have to do. How do I code this efficiently without having to write out each possible calculation path? This is basically what I'm working with:
7 variables: A, B, C, D, E, F, G
3 equations:
AB = CD + EF
A = C + E
G = C*V(D) + E*V(F) - A*V(B)
Where V(x) is a function. Given any 4 variables, find the rest. Sorry if this is too much math at once, but I can't really simplify it any more than this. I'd appreciate any ideas for how to structure code for this kind of thing. Bolded is where the core of my problem is.
|
# ? Jun 4, 2009 18:14 |
|
Program it in Prolog. I can't believe there is a use for Prolog.
|
# ? Jun 4, 2009 18:45 |
|
Prolog is really just a relational database, like SQL. You wouldn't write a whole app in SQL, and doing so in Prolog is also a really bad idea. But when you need it, it's really great. Also, dynamic programming/memoization might solve this problem.
|
# ? Jun 4, 2009 18:53 |
|
royallthefourth posted:Prolog is really just a relational database, like SQL. You wouldn't write a whole app in SQL, and doing so in Prolog is also a really bad idea. But when you need it, it's really great. Yes, this is a nice constraint satisfaction problem, so Prolog would do the job well. Five bucks says tef has already seen this thread and is working on a solution right now
|
# ? Jun 4, 2009 19:01 |
|
Actually I only saw the thread now. Factor would be good too as it has reversible arithmetic.
|
# ? Jun 4, 2009 19:05 |
|
Dijkstracula posted:Yes, this is a nice constraint satisfaction problem, so Prolog would do the job well. Five bucks says tef has already seen this thread and is working on a solution right now Without the definition of the V function it's a little hard to pick an appropriate solver approach, although there are certainly some reasonable guesses.
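One general approach that avoids writing out the 35 cases: treat whichever 3 variables are not given as the unknowns of a small Newton iteration with a finite-difference Jacobian. This is only a sketch under stated assumptions; in particular V is unknown here, so V(x) = x is used purely for illustration and must be replaced with the real function:

```python
def V(x):
    return x  # placeholder for the real V function (assumption)

VARS = "ABCDEFG"

def residuals(vals):
    """The three balance equations, written as expressions that are 0
    when satisfied."""
    A, B, C, D, E, F, G = (vals[v] for v in VARS)
    return [A * B - (C * D + E * F),
            A - (C + E),
            G - (C * V(D) + E * V(F) - A * V(B))]

def gauss_solve(M, b):
    """Naive Gaussian elimination with partial pivoting (3x3 here)."""
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(c + 1, n):
            f = aug[r][c] / aug[c][c]
            for k in range(c, n + 1):
                aug[r][k] -= f * aug[c][k]
    x = [0.0] * n
    for r in reversed(range(n)):
        s = sum(aug[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (aug[r][n] - s) / aug[r][r]
    return x

def solve(known, guess=1.0, tol=1e-10, steps=50):
    """Solve for the 3 variables not listed in `known` by Newton's
    method, estimating the Jacobian by finite differences."""
    unknowns = [v for v in VARS if v not in known]
    x, h = [guess] * len(unknowns), 1e-7
    for _ in range(steps):
        vals = {**known, **dict(zip(unknowns, x))}
        r = residuals(vals)
        if max(abs(ri) for ri in r) < tol:
            break
        J = []
        for i in range(len(r)):
            row = []
            for j in range(len(x)):
                bumped = list(x)
                bumped[j] += h
                vb = {**known, **dict(zip(unknowns, bumped))}
                row.append((residuals(vb)[i] - r[i]) / h)
            J.append(row)
        dx = gauss_solve(J, [-ri for ri in r])
        x = [xi + di for xi, di in zip(x, dx)]
    return dict(zip(unknowns, x))
```

The same three residual functions serve all 35 combinations, since the split into knowns and unknowns happens at runtime. The usual Newton caveats apply: a bad initial guess may fail to converge, and degenerate combinations of knowns may leave the system singular.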
|
# ? Jun 4, 2009 19:07 |