npe
Oct 15, 2004
Unindexed, I get identical plans:

code:
---------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   184 |  1656 |    56   (2)|
|*  1 |  TABLE ACCESS FULL| JOBS |   184 |  1656 |    56   (2)|
---------------------------------------------------------------
The Oracle optimizer is pretty good most of the time, and crappy developers always try to second-guess it. I can't imagine a scenario where the CBO is fooled into doing a worse scan using IN() than it would for a DECODE().
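
(For reference, the two predicate styles under comparison -- hypothetical column and values against the JOBS table from the plan above:)

code:
-- IN-list form
select * from jobs where job_id in ('A', 'B', 'C');

-- DECODE form: maps matching values to 1, everything else to 0
select * from jobs where decode(job_id, 'A', 1, 'B', 1, 'C', 1, 0) = 1;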


Victor
Jun 18, 2004

Lancer383 posted:

I'm fairly sure that she is delirious
measure measure measure measure

Xae
Jan 19, 2005

yaoi prophet posted:

Unindexed, I get identical plans:

code:
---------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)|
---------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   184 |  1656 |    56   (2)|
|*  1 |  TABLE ACCESS FULL| JOBS |   184 |  1656 |    56   (2)|
---------------------------------------------------------------
The Oracle optimizer is pretty good most of the time, and crappy developers always try to second-guess it. I can't imagine a scenario where the CBO is fooled into doing a worse scan using IN() than it would for a DECODE().

This. Trying to game the CBO usually ends up with you failing. There are times when the CBO just plain ol' fucks up, but those are rare. The query as you write it is almost never the query that actually gets executed.

I see the same shit with COUNT(1) vs. COUNT(*) constantly, or people claiming that subselects are faster than joins, or some other shit. The CBO is good, real good.
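
(For example -- both of these count every row and, against the JOBS table from the plan above, should produce the exact same plan:)

code:
select count(*) from jobs;
select count(1) from jobs;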

Victor
Jun 18, 2004
That being said, shittier query optimization engines can, well, be shitty. Hence the "measure" thing I said above.

Uziel
Jun 28, 2004

Ask me about losing 200lbs, and becoming the Viking God of W&W.

noonches posted:

Use the SUM() function on the charge and group by the account number.
That would work if my sample size was all there is, but unfortunately I have a few hundred thousand records, and I need it to show only one instance of each account number and the total charges tied to that account number. Simply summing the charges and grouping by account number doesn't decrease the amount of data that needs to be looked at.

I have it working via reports, but I need to be able to query the data.

Any ideas?

Lancer383
Aug 15, 2007

Say what you want about the tenets of national socialism, Dude, at least it was an ethos.

Victor posted:

That being said, shittier query optimization engines can, well, be shitty. Hence the "measure" thing I said above.

Yeap, totally agree. I wish I could, but I'm not able to do an EXPLAIN PLAN the way the DBAs have our privileges set up.

Victor
Jun 18, 2004
Then complain, or just run big enough queries so that you can detect any relevant execution time differences.

noonches
Dec 5, 2005

It represents my lifestyle and status as a street-savvy irreverent youth, who lives large, and yet hungers for the next level in life.

Uziel posted:

That would work if my sample size was all there is, but unfortunately I have a few hundred thousand records, and I need it to show only one instance of each account number and the total charges tied to that account number. Simply summing the charges and grouping by account number doesn't decrease the amount of data that needs to be looked at.

I have it working via reports, but I need to be able to query the data.

Any ideas?

You only want one account number? Then just filter by that account number. Maybe I don't understand what you need to do.

npe
Oct 15, 2004

Lancer383 posted:

Yeap, totally agree. I wish I could, but I'm not able to do an EXPLAIN PLAN the way the DBAs have our privileges set up.

If you can create tables in your user schema, you can create your own plan table and use EXPLAIN PLAN INTO.

See http://www.adp-gmbh.ch/ora/explainplan.html
For the table details, see http://www.adp-gmbh.ch/ora/exp_plan/plan_table.html

Here's an example:

code:
SQL> create table my_plans (
        statement_id       varchar2(30),
        plan_id            number,
        timestamp          date,
        remarks            varchar2(4000),
        operation          varchar2(30),
        options            varchar2(255),
        object_node        varchar2(128),
        object_owner       varchar2(30),
        object_name        varchar2(30),
        object_alias       varchar2(65),
        object_instance    numeric,
        object_type        varchar2(30),
        optimizer          varchar2(255),
        search_columns     number,
        id                 numeric,
        parent_id          numeric,
        depth              numeric,        
        position           numeric,
        cost               numeric,
        cardinality        numeric,
        bytes              numeric,
        other_tag          varchar2(255),
        partition_start    varchar2(255),
        partition_stop     varchar2(255),
        partition_id       numeric,
        other              long,
        other_xml          clob,
        distribution       varchar2(30),
        cpu_cost           numeric,
        io_cost            numeric,
        temp_space         numeric,
        access_predicates  varchar2(4000),
        filter_predicates  varchar2(4000),
        projection         varchar2(4000),
        time               numeric,
        qblock_name        varchar2(30)
 );

Table created.

SQL> explain plan into my_plans for 
       select 1 from dual;

Explained.

SQL> select * from table(dbms_xplan.display('my_plans', null, 'typical'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
Plan hash value: 1388734953

-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |     1 |       |     2   (0)| 00:00:01 |
|   1 |  FAST DUAL       |      |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------

8 rows selected.

npe fucked around with this message at 17:27 on Mar 6, 2008

Victor
Jun 18, 2004

Uziel posted:

That would work if my sample size was all there is, but unfortunately I have a few hundred thousand records, and I need it to show only one instance of each account number and the total charges tied to that account number. Simply summing the charges and grouping by account number doesn't decrease the amount of data that needs to be looked at.
What? Grouping by account number => one record per account number. Isn't this what you want?
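
(A minimal sketch, with hypothetical table/column names:)

code:
-- one output row per account number, with that account's total charges
select account_number,
       sum(charge) as total_charges
from   charges
group  by account_number;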

Uziel
Jun 28, 2004

Ask me about losing 200lbs, and becoming the Viking God of W&W.

Victor posted:

What? Grouping by account number => one record per account number. Isn't this what you want?
Actually, yes. Grouping by account number worked. I initially misunderstood. Thanks to both of you.

Lancer383
Aug 15, 2007

Say what you want about the tenets of national socialism, Dude, at least it was an ethos.

Thanks to yaoi prophet for showing me how to create my own plan table!

Alrighty, so I ran my own execution plans, and here's what came back (note: controlled_group_status is a non-indexed, non-primary-key field):

code:
--------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |  5623 |   691K|    41 |
|*  1 |  TABLE ACCESS FULL   | EMPLOYEE    |  5623 |   691K|    41 |
--------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
 
   1 - filter("EMPLOYEE"."CONTROLLED_GROUP_STATUS"='A' OR 
              "EMPLOYEE"."CONTROLLED_GROUP_STATUS"='L' OR 
              "EMPLOYEE"."CONTROLLED_GROUP_STATUS"='P')
 
 Note: cpu costing is off
  
 --------------------------------------------------------------------
 | Id  | Operation            |  Name       | Rows  | Bytes | Cost  |
 --------------------------------------------------------------------
 |   0 | SELECT STATEMENT     |             |   224 | 28224 |    41 |
 |*  1 |  TABLE ACCESS FULL   | EMPLOYEE    |   224 | 28224 |    41 |
 --------------------------------------------------------------------
  
 Predicate Information (identified by operation id):
 ---------------------------------------------------
  
    1 - filter(DECODE("EMPLOYEE"."CONTROLLED_GROUP_STATUS",'A',1,'L',
               1,'P',1,0)=1)
  
Note: cpu costing is off
Can anyone explain this to me? I don't totally understand execution plans, but this looks like her way is faster.

Victor
Jun 18, 2004
code:
Note: cpu costing is off
?

Lancer383
Aug 15, 2007

Say what you want about the tenets of national socialism, Dude, at least it was an ethos.

Victor posted:

code:
Note: cpu costing is off
?

Yeah, I don't know. I looked up that message on Google and it looks like something that needs to be enabled in the DB's settings, and I'm afraid of venturing into the configuration of the DB, as I can imagine the DBAs wouldn't be too crazy about that.

Victor
Jun 18, 2004
Well, given that the "Cost" column is 41 for both examples, I'm wary of the other statistics. You just aren't giving us enough information to make any sound judgments. I can make almost any statement about optimizing SQL statements seem true if I try hard enough. You know the whole bit about making statistics say anything...

npe
Oct 15, 2004

Lancer383 posted:

Yeah, I don't know. I looked up that message on Google and it looks like something that needs to be enabled in the DB's settings, and I'm afraid of venturing into the configuration of the DB, as I can imagine the DBAs wouldn't be too crazy about that.

Ignore CPU costing for now; it's not going to help.

Those plans do indicate that the decode is faster, but I don't know why. Well, I do know why -- you're getting a bad plan when using the IN list. I don't know why that is, though. What version of Oracle is this?

Edit: fuck it

npe fucked around with this message at 21:56 on Mar 6, 2008

Lancer383
Aug 15, 2007

Say what you want about the tenets of national socialism, Dude, at least it was an ethos.

yaoi prophet posted:

Ignore CPU costing for now; it's not going to help.

Those plans do indicate that the decode is faster, but I don't know why. Well, I do know why -- you're getting a bad plan when using the IN list. I don't know why that is, though. What version of Oracle is this?

Also, can you post the full queries (even if you have to obscure tables and values a bit)?

I believe it's 8/8i. We're forced to execute SQL statements through this old program called Infomaker, which I believe is really intended to be a report-building tool, but it's all they give us.

Anyway, here's what outputs those execution plans:

code:
create table my_plans (
        statement_id       varchar2(30),
        plan_id            number,
        timestamp          date,
        remarks            varchar2(4000),
        operation          varchar2(30),
        options            varchar2(255),
        object_node        varchar2(128),
        object_owner       varchar2(30),
        object_name        varchar2(30),
        object_alias       varchar2(65),
        object_instance    numeric,
        object_type        varchar2(30),
        optimizer          varchar2(255),
        search_columns     number,
        id                 numeric,
        parent_id          numeric,
        depth              numeric,        
        position           numeric,
        cost               numeric,
        cardinality        numeric,
        bytes              numeric,
        other_tag          varchar2(255),
        partition_start    varchar2(255),
        partition_stop     varchar2(255),
        partition_id       numeric,
        other              long,
        other_xml          clob,
        distribution       varchar2(30),
        cpu_cost           numeric,
        io_cost            numeric,
        temp_space         numeric,
        access_predicates  varchar2(4000),
        filter_predicates  varchar2(4000),
        projection         varchar2(4000),
        time               numeric,
        qblock_name        varchar2(30)
 );

explain plan into my_plans for
SELECT	*
from	employee
where	controlled_group_status IN ('A','L','P');

select * from table(dbms_xplan.display('my_plans', null, 'typical'));

explain plan into my_plans for
SELECT	*
from	employee
where	decode(controlled_group_status,'A',1,'L',1,'P',1,0)=1;

select * from table(dbms_xplan.display('my_plans', null, 'typical'));

drop table my_plans;

npe
Oct 15, 2004

Lancer383 posted:

I believe it's 8/8i. We're forced to execute SQL statements through this old program called Infomaker that I believe is really intended to be a report-building program, but it's all they give us.

The thing is, you're getting full table scans out of both. This is going to suck no matter how you slice it. While the plans you posted (and some of the plans I get when trying similar syntax) seem to indicate a higher row cost, I don't think this is necessarily true (those figures tend to be just guesses from the CBO). I don't believe they actually indicate much of a difference; full scans are expensive no matter what.

I'm going to keep poking at it, though. Generally speaking, you want to be concerned with the access path more than anything. In this case, they are both full scans, so I really don't think there's an appreciable difference.

This applies to me too -- I made a mistake earlier in calling hers "slower" based on the estimated row/byte cost. A coworker had to remind me not to trust those, and that the two are probably equivalent. Hey, I'm a developer, not a DBA. :)
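
(One aside -- if those status values are selective, an index changes the access path entirely, and note that wrapping the column in DECODE() prevents a plain index from being used for that predicate, while the IN list can use it. A sketch, assuming you're allowed to create indexes:)

code:
create index employee_cgs_idx on employee (controlled_group_status);

-- the IN form can now use an index range scan; the DECODE form cannot
select * from employee where controlled_group_status in ('A','L','P');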

npe fucked around with this message at 22:06 on Mar 6, 2008

Victor
Jun 18, 2004
You also have to be VERY careful that you aren't just measuring 1% of the query cost. Decoding/processing IN lists should be cheap -- at worst roughly linear in the size of the list. If you run this against a very small table, then the decoding/IN-list processing time will dominate. However, in a real situation, things change. This is why "measure measure measure measure" is IMPORTANT ADVICE. And none of this fake measurement crap. If you don't follow this advice, you're going to screw up over and over, optimize things that don't need optimizing, and miss what's actually important. Don't be one of those people who optimizes for one user and fails miserably to handle more than ten.
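
(In SQL*Plus, the crude-but-honest way to do that measuring -- run each version a few times against realistic data volumes; counts are used here so client fetch time doesn't drown out the comparison:)

code:
set timing on

select count(*) from employee
where  controlled_group_status in ('A','L','P');

select count(*) from employee
where  decode(controlled_group_status,'A',1,'L',1,'P',1,0) = 1;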

Lancer383
Aug 15, 2007

Say what you want about the tenets of national socialism, Dude, at least it was an ethos.

Victor posted:

You also have to be VERY careful that you aren't just measuring 1% of the query cost. Decoding/processing IN lists should be cheap -- at worst roughly linear in the size of the list. If you run this against a very small table, then the decoding/IN-list processing time will dominate. However, in a real situation, things change. This is why "measure measure measure measure" is IMPORTANT ADVICE. And none of this fake measurement crap. If you don't follow this advice, you're going to screw up over and over, optimize things that don't need optimizing, and miss what's actually important. Don't be one of those people who optimizes for one user and fails miserably to handle more than ten.

I totally agree with you -- I just got off the phone with one of our DBAs, and he was delighted to help me out in setting up the plan_table on the DB I was using. He was happy to hear from someone who knew something about writing queries, and quickly launched into the horror shows of queries he has seen people write -- logs that print out to six pages of totally unnecessary UNIONed queries.

He said that there has been some discussion about setting up all of the DBs so that people can run EXPLAIN PLAN on them, and hopefully it will be something I can take advantage of on a regular basis without having to create new objects on the database.

npe
Oct 15, 2004
Sounds like you are on the right track. Always get in good with your DBAs; they will help you find poorly performing queries in production, and if you're lucky, they will go as far as rewriting things for you if they know a better way.

Also, I completely forgot to note that this is on Oracle 8, so yeah... that sucks. :(

Victor
Jun 18, 2004
Yay for showing your DBAs you're competent. If you keep it up and change companies when you hit walls of retardation, you should go places. :)

Lancer383
Aug 15, 2007

Say what you want about the tenets of national socialism, Dude, at least it was an ethos.
Thanks to both of you guys for helping me out with this - I do appreciate it.

Yeah, it's amazing how a huge company like the one I work for has such messed-up policies with regard to its databases: everyone uses the same login and password to hit the client databases, and updates and deletes are allowed in the prod environment. It's crazy, especially given the sensitive data we deal with, and I fear it's probably the way a lot of companies operate.

Quick question for you guys: are you aware of any programs out there for running batches of queries and writing the results to Excel (or CSV, whatever)? We have to run plenty of data audits (one of my current clients has over 250 queries -- I would have done it differently, since it should really be just five queries with the values in tables rather than hardcoded, but it wasn't mine to start with), so I wrote a Python program that takes in a query file that looks like this:

code:
Name1 : Description 1 :

Select * from table1 where something=something;

Name2 : Description 2 :

Select * from table2 where something=something;
It uses cx_Oracle (a Python module to connect to Oracle) and pyExcelerator (to write binary Excel files), and it works pretty darn well (it ran the 250 queries and wrote them to Excel in 2 minutes, complete with a summary page displaying the names, descriptions, and row counts for every query, as well as links to the individual result sheets, etc.). Very nice for desktop use.

My manager likes it so much, though, that he wants it to be something that everyone can use, have it run through our scheduling tool, etc., and the question has come up of whether there are other comparable products out there, so that we can decide what makes the best business case as an "enterprise tool."

While my tool is pretty decent (and I'd be more than willing to share the Python code and/or executable with anyone who might have the same needs), I'm having trouble finding something else out there like what I've put together. It seems nuts that this would be the first time someone would need a tool like this, so I have to think that there's something else out there like it, and if it has better support, features, etc., that's what we might go for.

Lancer383 fucked around with this message at 04:22 on Mar 7, 2008

npe
Oct 15, 2004
I've never heard of anything, but I'm not really paying attention on that end of things. Personally, I'd just stick with the homebrew app; it will be less shitty than whatever "enterprise" solution gets sold to you, and it will be customizable in the future as well. Just my opinion, though.

Sergeant Hobo
Jan 7, 2007

Zhu Li, do the thing!
I'm taking a class where we're using Oracle 10g. I want to know if there's a way to do one command that drops all the tables at once (mainly because I don't feel like typing drop table tablename; a billion times, not that I'm lazy or anything). A quick Google search came up with this command:

code:
select 'drop table ', table_name, 'cascade constraints \;' from user_tables;
but that doesn't seem to work, as it only shows the table names (though it did lead to a nifty command for showing the constraints on a table).
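
(The Google version is close -- it just needs concatenation so each row comes out as one runnable statement. Something like this; spool the output and run it as a script:)

code:
select 'drop table ' || table_name || ' cascade constraints;'
from   user_tables;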

Fenderbender
Oct 10, 2003

You have the right to remain silent.
I'm more or less trying to figure out why this query takes 3 minutes to run and how to optimize it.

code:
SELECT   USERS.USER                      AS USER,
         USERS.FULLNAME                  AS FULL_NAME,
         USERS.ROLE                      AS ROLE,
         COUNT(DISTINCT TIMING.SESSION)  AS SESSION_COUNT
FROM     TIMING
JOIN     USERS
WHERE    TIMING.USER = USERS.USER
         AND ARGS != '-f:'
         AND TIMING.ENTRYTIME BETWEEN '2008-02-07' AND '2008-02-08'
GROUP BY USER
ORDER BY SESSION_COUNT DESC
LIMIT    30;
There are 1,668,924 rows in the table "timing" and over 2,000 in "users". I'm using the users table to retrieve user information: it just contains information about each user, but most importantly its "user" field is the username, which correlates to the same field in the timing table. I want to return each person's username, full name, and role, along with how many different sessions they have used. Session IDs are listed under timing.session.

Sorry if I'm not being descriptive enough. Ask away if I'm being unclear.

Edit: Oh and MySQL 5.x
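
(For what it's worth, here's the same query with an explicit ON clause -- assuming ARGS lives in TIMING -- plus an index sketch that would let the date-range filter avoid a full scan; the index name is made up:)

code:
SELECT   u.user                    AS user,
         u.fullname                AS full_name,
         u.role                    AS role,
         COUNT(DISTINCT t.session) AS session_count
FROM     timing t
JOIN     users  u ON u.user = t.user
WHERE    t.args != '-f:'
AND      t.entrytime BETWEEN '2008-02-07' AND '2008-02-08'
GROUP BY u.user, u.fullname, u.role
ORDER BY session_count DESC
LIMIT    30;

-- composite index: range scan on entrytime first, then join on user
CREATE INDEX timing_entry_idx ON timing (entrytime, user);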

Fenderbender fucked around with this message at 17:03 on Mar 7, 2008

var1ety
Jul 26, 2004

Sergeant Hobo posted:

I'm taking a class where we're using Oracle 10g. I want to know if there's a way to do one command that drops all the tables at once (mainly because I don't feel like typing drop table tablename; a billion times, not that I'm lazy or anything). A quick Google search came up with this command:

code:
select 'drop table ', table_name, 'cascade constraints \;' from user_tables;
but that doesn't seem to work, as it only shows the table names (though it did lead to a nifty command for showing the constraints on a table).

Run something like this. You can turn it into a procedure for easy reuse.

code:
begin
    -- drop every table in the current schema
    for rec in (select table_name from user_tables)
    loop
        -- cascade constraints so foreign keys don't block the drops
        execute immediate 'drop table ' || rec.table_name || ' cascade constraints';
    end loop;
end;
/

gtfowned
Jun 28, 2005
dunce
What would be the best way to create a table (or series of tables) to store a multitude of comments?

For example, let's say I wanted to host a website that had some products on it, with personal and user reviews. There will be several hundred items in the table and the possibility of an unlimited number of reviews. Would it be best to have one giant table chock-full of comments, with a reference value pointing to the table full of items? (I don't anticipate the comments going over 2,000 entries or so, if ever.) Or would it be better to make each item contain a text/blob variable for comments?

Go easy on me, I'm new to SQL.

noonches
Dec 5, 2005

It represents my lifestyle and status as a street-savvy irreverent youth, who lives large, and yet hungers for the next level in life.
You should definitely create a linked table for the comments, with keys pointing to the item and to the user who made the comment. It's a much better approach than trying to cram all the info into a blob.

If you created a text blob, how would you generate a count of comments? Or pull all the comments made by a specific user?
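
(A sketch of the linked-table approach -- names are made up:)

code:
-- comments live in their own table, keyed back to items and users
CREATE TABLE comments (
  comment_id INT      NOT NULL AUTO_INCREMENT PRIMARY KEY,
  item_id    INT      NOT NULL,  -- which product the comment is on
  user_id    INT      NOT NULL,  -- who wrote it
  body       TEXT     NOT NULL,
  created_at DATETIME NOT NULL,
  KEY item_idx (item_id),
  KEY user_idx (user_id)
) ENGINE=MyISAM;

-- count of comments per item: trivial here, impossible with one blob per item
SELECT item_id, COUNT(*) AS comment_count FROM comments GROUP BY item_id;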

other people
Jun 27, 2004
Associate Christ
I just know the basics when it comes to SQL. I get excited when I do a left join and it works.

I have a MySQL 5 DB with a simple table. Just imagine I have a table with id, user_id, post_text, and post_hidden. post_hidden is 0 if the post is visible, and 1 if the post should not be visible.

I would like to grab all the posts that are visible, plus the hidden posts of the current user. I don't want the user to be able to see hidden posts that are not his.

So I need all posts, minus the hidden posts not made by the user. That is a bit more than I can wrap my head around in SQL.

MORE CURLY FRIES
Apr 8, 2004

Kaluza-Klein posted:

I just know the basics when it comes to SQL. I get excited when I do a left join and it works.

I have a MySQL 5 DB with a simple table. Just imagine I have a table with id, user_id, post_text, and post_hidden. post_hidden is 0 if the post is visible, and 1 if the post should not be visible.

I would like to grab all the posts that are visible, plus the hidden posts of the current user. I don't want the user to be able to see hidden posts that are not his.

So I need all posts, minus the hidden posts not made by the user. That is a bit more than I can wrap my head around in SQL.

Try something like
COUNT(post_text) as "hidden posts" WHERE post_hidden = 0

Or similar. Not too sure on the syntax myself right now.

IsaacNewton
Jun 18, 2005

You should do something along the lines of:
code:
SELECT * FROM posts WHERE post_hidden = 0 OR user_id=?
Giving the current user id as parameter to the query.

No Safe Word
Feb 26, 2005

Kaluza-Klein posted:

I just know the basics when it comes to SQL. I get excited when I do a left join and it works.

I have a MySQL 5 DB with a simple table. Just imagine I have a table with id, user_id, post_text, and post_hidden. post_hidden is 0 if the post is visible, and 1 if the post should not be visible.

I would like to grab all the posts that are visible, plus the hidden posts of the current user. I don't want the user to be able to see hidden posts that are not his.

So I need all posts, minus the hidden posts not made by the user. That is a bit more than I can wrap my head around in SQL.
Why wouldn't this work? (I think it does just fine)
code:
SELECT
   *
FROM
   `table`
WHERE
  user_id = @current_user_id
  OR
  post_hidden = 0
e: f;b :argh:

Little Brittle
Nov 1, 2004

Come visit me dawg
I have a table set up like this:

code:
--
-- Table structure for table `product_feed`
--

CREATE TABLE `product_feed` (
  `sku` varchar(24) NOT NULL,
  `brand` varchar(48) NOT NULL,
  `prodname` varchar(48) NOT NULL,
  `link` text NOT NULL,
  `keywords` text NOT NULL,
  `date` varchar(24) NOT NULL,
  `category` varchar(24) NOT NULL,
  UNIQUE KEY `sku` (`sku`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
How would I create efficient queries and indexes for partial matches? Say I wanted to search the brand, prodname, and keywords fields for the keyword 'Sony'. Ideally, it would return all SKUs that contain that keyword in any of those three fields. I can't find a nice way to get partial matches without doing a full-text search (MATCH ... AGAINST). Is there a more efficient way of doing it? How would I index the fields to put the least strain on the server?

other people
Jun 27, 2004
Associate Christ

No Safe Word posted:

Why wouldn't this work? (I think it does just fine)

e: f;b :argh:

It does work! I just wasn't familiar with putting an OR in the WHERE clause like that. Thank you all.

Xae
Jan 19, 2005

Little Brittle posted:

I have a table set up like this:

code:
--
-- Table structure for table `product_feed`
--

CREATE TABLE `product_feed` (
  `sku` varchar(24) NOT NULL,
  `brand` varchar(48) NOT NULL,
  `prodname` varchar(48) NOT NULL,
  `link` text NOT NULL,
  `keywords` text NOT NULL,
  `date` varchar(24) NOT NULL,
  `category` varchar(24) NOT NULL,
  UNIQUE KEY `sku` (`sku`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
How would I create efficient queries and indexes for partial matches? Say I wanted to search the brand, prodname, and keywords fields for the keyword 'Sony'. Ideally, it would return all SKUs that contain that keyword in any of those three fields. I can't find a nice way to get partial matches without doing a full-text search (MATCH ... AGAINST). Is there a more efficient way of doing it? How would I index the fields to put the least strain on the server?
It depends on your DB.
MS SQL Server
MySQL
Postgres
Oracle

Just google "Full Text Search" and the name of your DB, looks like MySQL.
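
(On MyISAM that's built in -- a sketch against the posted table:)

code:
-- full-text index across the three searchable fields
ALTER TABLE product_feed
  ADD FULLTEXT KEY ft_search (brand, prodname, keywords);

-- all SKUs matching 'Sony' in any of the indexed fields
SELECT sku
FROM   product_feed
WHERE  MATCH (brand, prodname, keywords) AGAINST ('Sony');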

other people
Jun 27, 2004
Associate Christ
Hey look I am back again.

So I have a table, movie_titles, and another table, movie_codec. movie_titles.codec_id joins to movie_codec.codec_id. movie_codec was a mess, so I emptied it and made new rows (I have backups, of course!).

I know which old codec_id values correspond to the new ones, so I was about to do a simple UPDATE when it occurred to me that I would just be screwing myself.

Let's say old id 1 is now 2. I tell it to update all rows where codec_id is 1 and change it to 2. Now when I ask it to change old id 2 to its new value, it won't know which 2's are old (and need to be changed) versus new (and don't).

Is there some fancy way to do this all at once? I'm sure I could come up with some PHP that will do it with lots of arrays and junk, but it sure would be nice to avoid that.


Can I just do a whole bunch of SET/WHERE clauses in the same query, one after the other? When it gets to the second SET/WHERE clause, will it still be matching on the original data or will it include the new values?

MORE CURLY FRIES
Apr 8, 2004

Create a new column (idOld or something) and update that column with the current values.

Then run the update based on idOld (UPDATE whatever SET whatever WHERE idOld = oldid).

Then drop the idOld column once you're done.
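
(A sketch with the posted example ids -- the frozen copy is what makes the remapping safe:)

code:
-- 1. freeze the current ids in a scratch column
ALTER TABLE movie_titles ADD COLUMN codec_id_old INT;
UPDATE movie_titles SET codec_id_old = codec_id;

-- 2. remap against the frozen column; already-updated rows can't collide
UPDATE movie_titles SET codec_id = 2 WHERE codec_id_old = 1;

-- 3. drop the scratch column once every mapping has run
ALTER TABLE movie_titles DROP COLUMN codec_id_old;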

other people
Jun 27, 2004
Associate Christ

MORE CURLY FRIES posted:

create a new column (idOld or something) and update that column with the current values.

Then run the update based on idOld (UPDATE whatever with whatever WHERE idOld = oldid)

Then drop the oldID column once you're done.

That seems so obvious I feel like I've been slapped in the face. Thank you!

I was trying to do the exact same thing, but instead of copying a column I had copied the entire table and was trying to compare against the 2nd table.


Orbis Tertius
Feb 13, 2007

I've been teaching myself how to do MySQL stuff using PHP, and I've been doing pretty well so far, but I'm getting nowhere with my current problem. Here's what I'm trying to do:

I want to have a form that lets the user search a number of variables that are in several different tables and I want the resulting table to only display columns for the fields that were filled in on the form.

I'm having trouble figuring out how to do a SELECT query that will join only the necessary tables and search only the necessary variables based on what form fields the user fills in.

The example below is the smaller problem I'm working on now. There are two columns, Age and Gender, in a single table, 'demographics'. If the user only fills in the Age field on the form, then I just want to search the table by Age. Same with Gender. If the user enters values for both Age and Gender, I want to search the table by both values.

So right now I'm just trying to figure out whether it's possible to have different WHERE clauses based on whether the PHP variables are NULL or not. Here's my last trial-and-error attempt, where $age and $gender hold the values entered in the form for Age and Gender, or NULL if the field was left blank:

code:
SELECT * FROM demographics WHERE IF($age IS NOT NULL, AND age = $age, '') IF($gender IS NOT NULL, AND gender = $gender, '')
I know that's way off track, but I hope it shows what I'm trying to do. I'd MUCH rather have one huge SQL statement that gives me an array with all the right entries than have to build all the possible SQL queries in PHP to do the same thing.
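
(One common pattern for optional filters is to let a NULL parameter disable its own clause -- a sketch using the post's placeholders, assuming the values are bound as parameters rather than pasted into the string:)

code:
SELECT *
FROM   demographics
WHERE  ($age    IS NULL OR age    = $age)
AND    ($gender IS NULL OR gender = $gender);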
