|
php:
<?
14 mysql_query("INSERT INTO main
15 (player_name, player_class, level) VALUES
16 ($_POST["ign"], $_POST["p_class"], $_POST["level"]) ")
17 or die(mysql_error());
?>

Parse error: syntax error, unexpected '"', expecting T_STRING or T_VARIABLE or T_NUM_STRING in /directory/script.php on line 16

Did I write something wrong?
|
# ? Jul 14, 2011 16:19 |
|
|
|
Userf4058 posted:
1) You've got double quotes inside your double-quoted string. Change these to single quotes and put curly braces around your variables.
2) You're setting yourself up for SQL injection. You should be using PDO instead.
3) Using die() and outputting your error for your users to see is not very elegant.

e: with PDO you would do this

php:
<?
$statement = $mysql->prepare("INSERT INTO main (player_name, player_class, level) VALUES (:name, :class, :level)");
$statement->execute(array(
    ":name"  => $_POST['ign'],
    ":class" => $_POST['p_class'],
    ":level" => $_POST['level']
));
?>

butt dickus fucked around with this message at 16:35 on Jul 14, 2011
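For readers outside PHP, the same placeholder idea exists in most database APIs. A minimal sketch of the equivalent in Python's sqlite3 (the table is renamed to `players` to sidestep SQLite's built-in `main` schema name, and the POST payload is invented; this is an illustration of the pattern, not the poster's code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE players (player_name TEXT, player_class TEXT, level INT)"
)

# Stand-in for the $_POST payload from the form, including a hostile value
post = {"ign": "Robert'); DROP TABLE players;--", "p_class": "mage", "level": 12}

# Named placeholders, like PDO's :name/:class/:level: the driver passes
# the values out-of-band, so the hostile player_name is stored as plain
# data instead of being executed as SQL.
conn.execute(
    "INSERT INTO players (player_name, player_class, level) "
    "VALUES (:name, :class, :level)",
    {"name": post["ign"], "class": post["p_class"], "level": post["level"]},
)

rows = conn.execute("SELECT player_name, level FROM players").fetchall()
print(rows)
```

The same query with string interpolation would have been injectable; with placeholders the quoting problem from the original post disappears entirely.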
# ? Jul 14, 2011 16:29 |
|
Doctor rear end in a top hat posted:
code:

Btw, how is using PDO different from something like this?
|
# ? Jul 14, 2011 16:55 |
|
Userf4058 posted:
wow this is great. Thank you so much. Yeah, I'm a newbie.

That's more of a question for the PHP thread, but PDO does several things: it works with other databases like MSSQL, and it does prepared statements, transactions, and piles of other stuff.
|
# ? Jul 14, 2011 17:02 |
|
revmoo posted:
I've just started working with mysql workbench, and if I'm logged into a sql server and I've selected my schema and I type:

You're using single quotes instead of backticks. Try this:

DROP DATABASE `nameofdb`;
|
# ? Jul 14, 2011 18:09 |
|
Thel posted:
What's with fare2? What is it doing?

Your restatement is precisely what I'm hoping to achieve.

Goat Bastard posted:
If your database supports analytic functions then you probably want something like
|
# ? Jul 14, 2011 18:16 |
|
isnoop posted:
MySQL does indeed support that functionality as a basic nested query. I rewrote my SQL that way and now it works. Granted, it is slower than before, but it also appears to be returning the correct data for a change.

It's hard to know without seeing the real query, but I suspect that the problem with your original one was the GROUP BY. I'm not a MySQL expert, but I think if your GROUP BY clause doesn't cover all the non-aggregate columns you're trying to use then it just picks an arbitrary row that satisfies it. The databases I'm more used to wouldn't even compile that; you'd get an exception saying something like "Column not part of Group By".

Goat Bastard fucked around with this message at 21:58 on Jul 14, 2011
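That nonstandard GROUP BY behavior is easy to see first-hand. SQLite shares it with MySQL's default mode and ships with Python, so it makes a convenient demo; the table and column names here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, item TEXT, amount INT);
    INSERT INTO orders VALUES
        ('alice', 'hat',   5),
        ('alice', 'shoe', 20),
        ('bob',   'coat', 15);
""")

# 'item' is neither aggregated nor named in the GROUP BY. SQLite (like
# MySQL in its default mode) silently picks its value from one row per
# group instead of raising an error the way stricter databases would.
rows = conn.execute(
    "SELECT customer, item, MAX(amount) FROM orders GROUP BY customer"
).fetchall()
print(rows)
```

The query runs without complaint and returns one row per customer, but which `item` accompanies each group is an implementation detail, which is exactly why the "correct data for a change" observation above rings true.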
# ? Jul 14, 2011 21:52 |
|
I am still a SQL newbie, so I am looking for some help understanding how to do an update using STUFF in T-SQL. I have a query that is logically how I want things to behave, but it's kind of a mess. It seems that the STUFF function does not like taking in multiple results from a subquery like this. Is there a better way that I could return the results and then enumerate over them iteratively? I would greatly appreciate any input or criticism.

code:
MrDoDo fucked around with this message at 23:27 on Jul 14, 2011 |
# ? Jul 14, 2011 23:14 |
|
Reading what you are trying to do, you have a few too many steps doing basically the same thing. Is the STUFF command doing what it should? If so, you should be doing something much simpler. Also, from the looks of it, the REPLACE call is completely unnecessary, as it's always going to be the same as the field you're getting in a subquery. As far as I can tell this will give you the exact same results.

code:
|
# ? Jul 15, 2011 00:26 |
|
Oh, I had no idea SQL used backticks (`), right? Learn something new every day.
|
# ? Jul 15, 2011 01:52 |
|
No, SQL uses double quotes. MySQL uses backticks, but also accepts double quotes. Oh, and MSSQL uses loving square brackets, I am not making that up. If you want to think cross-platform, think double quotes.
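As it happens, SQLite accepts all three identifier-quoting styles as compatibility shims, which makes for a quick side-by-side demonstration (SQLite is used here only because it ships with Python; by default MySQL takes backticks, MSSQL takes square brackets, and standard SQL takes double quotes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Same table referenced three ways: standard double quotes, MySQL-style
# backticks, and MSSQL-style square brackets. SQLite tolerates all three.
conn.execute('CREATE TABLE "my table" (x INT)')
conn.execute('INSERT INTO `my table` VALUES (1)')
rows = conn.execute('SELECT x FROM [my table]').fetchall()
print(rows)
```

All three statements hit the same table, which is the punchline: the quoting is pure dialect, the identifier underneath is identical.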
|
# ? Jul 15, 2011 08:01 |
|
Sprawl posted:
Reading what you are trying to do you have a few too many steps doing basically the same thing.

Wow, that's awesome, thank you so much for clarifying. Unfortunately I think my mind got a little too clouded about what exactly the conditions were and I just threw it everywhere, making this way too messy. This gives me a great new understanding. Thanks again!
|
# ? Jul 15, 2011 15:56 |
|
McGlockenshire posted:
No, SQL uses double quotes. MySQL uses backticks, but also accepts double quotes.

Just a quick note: you can make MySQL accept double quotes, but the sql_mode has to be set appropriately, otherwise it thinks double quotes are just string quotes.

420 set session sql_mode = 'ANSI_QUOTES,TRADITIONAL' every day
|
# ? Jul 15, 2011 19:24 |
|
Any tips on my.cnf settings for InnoDB? Going from this site: http://www.mysqlperformanceblog.com/2006/05/30/innodb-memory-usage/ I've decided to change the following settings on our server. code:
MemTotal: 12299960 kB
MemFree:   2467744 kB

Our ibdata1 file is 16G. The server has 3 vCPUs (Xeon L5520 @ 2.27GHz). The Ruby/Apache server has 5 vCPUs, and also has 12GB. Wouldn't it make more sense to give the DB server more RAM and maybe equal out the CPUs? Not sure if it's a single quad with HT enabled, or a dual quad with HT disabled. Maybe we should enable HT and give each server 8 vCPUs?
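The settings the poster actually chose were lost above. Purely as an illustration of the knobs the linked article discusses (every value here is an assumption, not the poster's configuration), an InnoDB-focused my.cnf fragment for a dedicated box with about 12GB of RAM might look like:

```
[mysqld]
# Rule of thumb: the buffer pool gets the bulk of RAM on a dedicated DB host
innodb_buffer_pool_size = 8G
# Larger redo logs reduce checkpoint pressure under write-heavy load
innodb_log_file_size = 256M
# Bypass the OS page cache so pages aren't buffered twice
innodb_flush_method = O_DIRECT
# One tablespace file per table instead of one ever-growing ibdata1
innodb_file_per_table = 1
```

Note that innodb_file_per_table only affects tables created after it is set; an existing 16G ibdata1 will not shrink on its own.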
|
# ? Jul 15, 2011 23:46 |
|
Cross-posting my problem(s) from the Django thread: (I am going through the Models chapter of the Django Book, and I am trying to set up a database. I assume that I have to create any database I want through a separate MySQL console, and that I can't do this through any Python/Django command.) For some reason both WAMP and XAMPP won't work on my Windows 7 installation; the first one throws a CLI error, and the latter won't give me access to the "admin" settings for my MySQL server. With both, I get this in the MySQL console: code:
I have played around with WAMP on my computer before (a couple of months ago), and it's possible that something else has interfered with it in the meantime, but I have no idea how the hell to troubleshoot it. There seem to be some really high-brow Windows fixes for problems like this, messing around with DLLs and the registry and whatnot, but they seem way too dangerous for my taste. I have tried uninstalling and reinstalling the programs and deleting the folders in C:\WAMP and C:\XAMPP, but to no avail.
|
# ? Jul 16, 2011 19:43 |
|
Why are you using wamp for django development? Just install the mysql binaries and then use the mysql workbench tool to administer it.
|
# ? Jul 16, 2011 20:29 |
|
MEAT TREAT posted:
Why are you using wamp for django development? Just install the mysql binaries and then use the mysql workbench tool to administer it.

How so? I am just getting into Django and general deployment.
|
# ? Jul 16, 2011 20:34 |
|
ufarn posted:
How so? I am just getting into Django and general deployment.

Well, it includes PHP, which isn't what you use to program Django, as you know, so that's just extra crap. Apache is a full web server which you really shouldn't mess with until you want to deploy your app in production. Django has a built-in web server which you should use, because configuring Apache to work with Python is not that interesting and is full of pitfalls like the ones you are having. Install MySQL from the binaries on their site and continue your examples.
|
# ? Jul 17, 2011 01:01 |
|
MEAT TREAT posted:
Well it includes php which isn't what you use to program django, as you know, so that's just extra crap. Apache is a full web server which you really shouldn't mess with until you want to deploy your app in production. Django has a built in web server which you should use because configuring Apache to work with python is not that interesting and full of pitfalls like the ones you are having.

EDIT: The server tool also fails to set up a MySQL server for the same reason:

quote:
Connecting to MySQL server localhost...

ufarn fucked around with this message at 10:19 on Jul 17, 2011
# ? Jul 17, 2011 08:48 |
|
You don't have the server installed. That's the most important part. When you install it make sure you install it as a windows service, otherwise you need to start mysql manually every time you reboot. http://dev.mysql.com/downloads/mysql/
|
# ? Jul 17, 2011 17:07 |
|
MEAT TREAT posted:
You don't have the server installed. That's the most important part. When you install it make sure you install it as a windows service, otherwise you need to start mysql manually every time you reboot.

code:
I should probably find out where the database is stored on my computer, too ...
|
# ? Jul 17, 2011 17:37 |
|
How do the OS numbers break down for *SQL development? Is there a general trend towards windows? Obviously anyone doing MSSQL probably isn't going to be using linux.
|
# ? Jul 17, 2011 18:36 |
|
This is off the cuff, but I'd guess that Windows is relatively dominant on the small-to-medium end and then probably nearly disappears on the big iron/enterprise side. I'm guessing shops running Apache aren't using MSSQL, shops running IIS aren't using MySQL, etc. Can't really find much concrete on database server numbers, but it's probably an extrapolation of this, as I don't see too many shops mixing flavours: http://greatstatistics.com/ You might be able to get more by clicking around in this site: http://trends.builtwith.com/Web%20Server But I was only able to find web servers, not database servers (though they have a separate application frameworks section as well).
|
# ? Jul 19, 2011 21:01 |
|
angrytech posted:
How do the OS numbers break down for *SQL development? Is there a general trend towards windows? Obviously anyone doing MSSQL probably isn't going to be using linux.

Without getting too specific... I work for a large company. We use a handful of DB2 databases; couldn't tell you what they're on. There are probably only two or three big ones running DB2. Thousands of SQL Server databases running on Windows Server, and thousands of Oracle DBs on Solaris. For development I use SQL Server Management Studio, Toad, DB2 Command Editor and, even for the sake of just having another editor, SQL Developer (actually because it's free and the company direction is to stop paying for Toad). Anyway, we are moving generally more towards Oracle. Stability-wise the Windows Server machines have been hit or miss, whereas the Oracle boxes don't really ever have many problems.
|
# ? Jul 21, 2011 14:45 |
|
A while back I asked about a very related issue, and I'm back begging for more help. I've got a very large source table that I want to filter into another table, by way of joining on another table, F. F looks like this: code:
The target table A looks like this: code:
code:
Same as last time, we had originally designed everything around there being only a single row per key per day. But in fact, our upstream systems are sometimes spitting out multiple records a day. Something like this: code:
insert into A
select distinct F.key, day, metric, dmetric
from S join F on S.key = F.key

Dead simple. The distinct was kinda cheesy, but it ran pretty fast so I had no reason to mess with it. As it turns out, things get really screwy downstream because of the assumption of only one row per day per key. I've been trying to figure out how to modify the query to basically turn the two example rows into this:

code:
This is what I had at first: code:
- joining just on key means multiple rows of S, because of multiple rows with the same key in different folders and subfolders
- SUM oversumming in those cases

So this is the new version:

code:
e: one thing that's sort of inconsistent looking is the FIRST_KEY. I don't specify the same longwinded ORDER for it because all of the dupe rows should have the same timestamp, which is the only thing I care about there. But that doesn't take away from my main goal, which is an overall cleaner way to solve the problem. Oh, I tried this: code:
kimbo305 fucked around with this message at 23:31 on Jul 21, 2011 |
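The general shape of the fix being sought here (keep exactly one row per key per day, chosen by latest timestamp) can be sketched with a window function. SQLite stands in for Vertica purely because it ships with Python, and the table and column names are invented; it assumes a SQLite new enough for window functions (3.25+, bundled with Python 3.7 and later):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE S (k TEXT, day TEXT, ts INT, metric INT);
    INSERT INTO S VALUES
        ('k1', '2011-07-14', 100, 5),
        ('k1', '2011-07-14', 200, 9),  -- later duplicate for the same day
        ('k2', '2011-07-14', 150, 3);
""")

# Number the rows within each (key, day) group, newest timestamp first,
# then keep only the top-ranked row from each group.
rows = conn.execute("""
    SELECT k, day, metric FROM (
        SELECT k, day, metric,
               ROW_NUMBER() OVER (
                   PARTITION BY k, day ORDER BY ts DESC
               ) AS rn
        FROM S
    ) WHERE rn = 1
    ORDER BY k
""").fetchall()
print(rows)
```

ROW_NUMBER rather than RANK sidesteps the tie problem: even if two duplicate rows share a timestamp, exactly one survives per group.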
# ? Jul 21, 2011 23:26 |
|
I think this gives you what you want:code:
|
# ? Jul 22, 2011 01:37 |
|
Anyone know of a way to select something like ROWNUM in DB2? I see a way to order by it, but I want to select it. I found a really old lovely database where the assholes have no PK and a lot of repeating records I want to delete.
|
# ? Jul 22, 2011 03:31 |
|
Aredna posted:
I think this gives you what you want:

Hmmmm, that's really elegant, but the metric can increase or decrease. Is there a way to patch that in?

e: I suppose I could interpolate and use AVG for both... not quite the same behavior but probably good enough. I'll perf test that.

kimbo305 fucked around with this message at 05:18 on Jul 22, 2011
# ? Jul 22, 2011 04:54 |
|
I almost made a comment about assuming it was strictly increasing for that to work too. Since it can go either way you'll have to use rank (or row_number) to get the last record for the day based on the timestamp, which is almost always going to run slower. This should work too and should be faster if few keys are filtered out by the join to F. code:
|
# ? Jul 22, 2011 05:21 |
|
S is huge and F is tiny, as is the filtered result set, so that last query is not gonna be ideal. It seems similar to the 2nd variant I posted, which is very slow. Let's see, S is 25 billion rows and 280 million distinct keys, F is 1.4 mil rows and 1.1 mil distinct keys. The resulting filtered table is e: 360 mil rows. We're using Vertica, so there's no indexes. F is an unsegmented table because it's so small, and S is a single super-projection that segments on key and orders by key, day. kimbo305 fucked around with this message at 05:46 on Jul 22, 2011 |
# ? Jul 22, 2011 05:39 |
|
benisntfunny posted:
Anyone know of a way to select something like ROWNUM in DB2? I see a way to order by it but I want to select it. I found a really old lovely database where the assholes have no PK and a lot of repeating records I want to delete.

row_number() OVER (ORDER BY whatever)
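A sketch of how row_number() can drive that kind of cleanup: number the copies within each group of identical rows, then delete everything past the first. This uses SQLite via Python (the window-function syntax is the same in DB2); SQLite's hidden rowid stands in for whatever physical row identifier your database offers, and the data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (a INT, b TEXT);  -- no primary key, duplicates allowed
    INSERT INTO t VALUES (1, 'x'), (1, 'x'), (1, 'x'), (2, 'y');
""")

# Number identical rows within each (a, b) group; copies 2..n are the
# duplicates, and rowid lets us address them individually for deletion.
conn.execute("""
    DELETE FROM t WHERE rowid IN (
        SELECT rowid FROM (
            SELECT rowid,
                   ROW_NUMBER() OVER (PARTITION BY a, b ORDER BY rowid) AS rn
            FROM t
        ) WHERE rn > 1
    )
""")
rows = conn.execute("SELECT a, b FROM t ORDER BY a").fetchall()
print(rows)
```

One representative of each duplicated row survives, without ever needing a real primary key.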
|
# ? Jul 22, 2011 08:00 |
|
kimbo305 posted:
S is huge and F is tiny, as is the filtered result set, so that last query is not gonna be ideal. It seems similar to the 2nd variant I posted, which is very slow.

That's a bit larger than I was expecting. I'm not familiar with Vertica, but the segmented setup sounds similar to Greenplum's distributed architecture that I've used before. With Greenplum I've found performance benefits in similar situations by breaking it up into two steps, where the first step would be finding the distinct keys in F and placing them in a table segmented in the same manner as S.

Goat Bastard posted:
This may not work for deleting the duplicated records, but if all the repeated records are 100% identical you can select all of the distinct ones into a new table, drop the old table, and then rename the new one. If you are worried about views or anything else breaking, a safer solution would be to delete all of the records out of the old table and insert everything from the new table into the old one. Both solutions are just taking a sledgehammer to the problem, so if you need something more elegant you can modify the second suggestion to only select/delete/insert the repeated records.
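The sledgehammer version is only a few statements. Sketched in Python's sqlite3 (any engine with CREATE TABLE ... AS SELECT works the same way; table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (a INT, b TEXT);
    INSERT INTO t VALUES (1, 'x'), (1, 'x'), (2, 'y'), (2, 'y'), (2, 'y');

    -- copy one representative of each row, then swap the tables
    CREATE TABLE t_clean AS SELECT DISTINCT a, b FROM t;
    DROP TABLE t;
    ALTER TABLE t_clean RENAME TO t;
""")
rows = conn.execute("SELECT a, b FROM t ORDER BY a").fetchall()
print(rows)
```

As the post warns, the drop-and-rename does lose indexes, triggers, and view bindings on the original table, which is why the delete-and-reinsert variant is the safer of the two.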
|
# ? Jul 22, 2011 15:25 |
|
kimbo305 posted:
I would do something like this: code:
Alternatively: code:
Both these queries assume that Vertica has IN or EXISTS or the equivalent (and if it has both, try EXISTS if IN performs poorly). If it has neither, then I do think joining to (select distinct key from F) instead of just F is the way to go. If that really performs horribly, then Aredna is almost certainly on the right track in terms of creating a separate table that has just the distinct keys from F.

Also, part of the problem is that Vertica isn't a normal OLTP-style SQL database, so even though it uses SQL syntax, it behaves differently from what I think most people reading this thread are used to.

Jethro fucked around with this message at 17:40 on Jul 22, 2011
# ? Jul 22, 2011 17:32 |
|
If I want to select a set of specific records from a table (where you would normally use an IN statement), but have a composite primary key, is there any better way to do it than a mess like this?

code:
vvv EDIT: At first they all said 'y1', sorry! It's a matched x,y pair for each primary key 1 & primary key 2.

Rescue Toaster fucked around with this message at 19:51 on Jul 22, 2011
# ? Jul 22, 2011 19:40 |
|
Rescue Toaster posted:
This? code:
|
# ? Jul 22, 2011 19:48 |
|
Sorry, I had it written wrong. There's a matched pair of primary key 1 and primary key 2. It would look more like (not valid SQL obviously):

WHERE (Pkey1, Pkey2) IN ((x1,y1),(x2,y2),(x3,y3),(x4,y4))

Both values must be specific to return that row.
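Despite the "not valid SQL" hedge, several engines accept essentially that row-value syntax (MySQL supports it, and SQLite does since 3.15). A quick check with Python's sqlite3, using made-up keys:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (pkey1 INT, pkey2 INT, val TEXT,
                    PRIMARY KEY (pkey1, pkey2));
    INSERT INTO t VALUES (1, 1, 'a'), (1, 2, 'b'), (2, 1, 'c'), (2, 2, 'd');
""")

# Row-value IN: each (pkey1, pkey2) pair must match as a unit, so
# (1, 1) and (2, 2) match while the crossed pairs (1, 2), (2, 1) do not.
rows = conn.execute("""
    SELECT val FROM t
    WHERE (pkey1, pkey2) IN (VALUES (1, 1), (2, 2))
    ORDER BY val
""").fetchall()
print(rows)
```

On engines without row values, the fallback really is the pile of OR'd AND pairs the poster eventually settled on.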
|
# ? Jul 22, 2011 19:50 |
|
Rescue Toaster posted:
No, There's a matched pair of primary key 1 and primary key 2. It would look more like (not valid SQL obviously) WHERE (Pkey1, Pkey2) IN ((x1,y1),(x2,y2),(x3,y3),(x4,y4))

Oh. Your original post only contained y1.

E: I see...
|
# ? Jul 22, 2011 19:51 |
|
Rescue Toaster posted:
Sorry, I had it written wrong. There's a matched pair of primary key 1 and primary key 2. It would look more like (not valid SQL obviously)

WHERE CONCAT(Pkey1, Pkey2) IN ('hurf', 'durf', etc...)

I think that will work. You might run into an issue where a row with Pkey1 = 'hu' and Pkey2 = 'rf' will match as well as another row with Pkey1 = 'h' and Pkey2 = 'urf'. You could cut down on this by concatenating something between them, like a pipe.
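That collision is easy to reproduce. Sketched with Python's sqlite3, where || is the concatenation operator standing in for CONCAT, and the data is invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (pkey1 TEXT, pkey2 TEXT);
    INSERT INTO t VALUES ('hu', 'rf'), ('h', 'urf');
""")

# Bare concatenation: both rows collapse to the same string 'hurf',
# so a lookup meant for one key pair matches both rows.
bare = conn.execute(
    "SELECT COUNT(*) FROM t WHERE pkey1 || pkey2 = 'hurf'"
).fetchone()[0]

# With a separator the two pairs stay distinct: 'hu|rf' vs 'h|urf'.
piped = conn.execute(
    "SELECT COUNT(*) FROM t WHERE pkey1 || '|' || pkey2 = 'hu|rf'"
).fetchone()[0]

print(bare, piped)
```

The separator trick only holds if the separator character can never appear inside the key values themselves, which is worth checking before relying on it.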
|
# ? Jul 22, 2011 19:59 |
|
I will have to check but I'm virtually certain in the table of interest they are ints. I think I'll just say screw it and build the multiple AND statements, it won't be too hard, just a little silly.
|
# ? Jul 22, 2011 21:07 |
|
|
|
Jethro posted:
Unless you mis-typed your sum(dmetric), you're still "over summing" since you didn't include the folder and subfolder. Also, most normal aggregates don't take an order by when you use them as analytic functions. The correct version should be:

code:
quote:
You don't need the FIRST_KEY because the whole point of the RANK() function is to pick out the rows that have the "first key".

quote:
I'll try these to see how Vertica's query planner estimates them. The temp table is probably still going to be faster, but I'm lazy and would rather not have to set that up. kimbo305 fucked around with this message at 22:08 on Jul 22, 2011 |
# ? Jul 22, 2011 22:06 |