chocojosh
Jun 9, 2007

D00D.

hexadecimal posted:

I sort by different criteria, but index is a primary key right? that is only applicable to one column in a table?

http://www.google.ca/search?q=database+index&ie=utf-8&oe=utf-8&aq=t&rls=com.ubuntu:en-US:unofficial&client=firefox-a

Index (database) posted:

A database index is a data structure that improves the speed of operations on a database table. Indexes can be created using one or more columns of a database table, providing the basis for both rapid random look ups and efficient access of ordered records.

A primary key uniquely identifies a record. Normally the DBMS makes it an index (because you are likely to filter your table records by primary key), although I recall reading that older versions of MySQL did not do this by default (I have not checked recent versions).

Indexes are used to make lookups faster. Think SELECT * FROM posts WHERE userid = 'hexadecimal'. From what I understand, the simplest index is an ordered structure (usually a B-tree) on the appropriate column, so that instead of an O(n) scan over unsorted data, finding the records with the specified value takes O(log n); some engines also offer hash indexes for O(1) lookups. DBMSs do offer more advanced indexes, but I've never needed to use them.

This was covered in my university undergraduate database course and I don't go to a fancy university. It also could've been answered really easily by google. I'm seriously wondering how you're in a master's program.
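The effect of an index shows up directly in a query plan. A minimal sketch in Python's sqlite3 (table and column names invented; SQLite's plain indexes are B-trees, so the win here is O(log n) versus O(n), not O(1)):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, userid TEXT, body TEXT)")
conn.executemany("INSERT INTO posts (userid, body) VALUES (?, ?)",
                 [("user%d" % (i % 100), "...") for i in range(1000)])

query = "SELECT * FROM posts WHERE userid = 'user42'"

# Without an index the planner has no choice but a full table scan...
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

# ...with one, the same query becomes an index search.
conn.execute("CREATE INDEX idx_posts_userid ON posts (userid)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]

print(before)  # e.g. "SCAN posts"
print(after)   # e.g. "SEARCH posts USING INDEX idx_posts_userid (userid=?)"
```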

jovial_cynic
Aug 19, 2005

code:
SELECT count( ID ) , dateFromDateTable AS dateAdded
FROM tblUsers
INNER JOIN 
(
  SELECT '2009-01-01' AS dateFromDateTable
  UNION
  SELECT '2009-01-02' AS dateFromDateTable
  UNION
  SELECT '2009-01-03' AS dateFromDateTable
  ) AS tblDate 
ON dateFromDateTable >= dateAdded
GROUP BY tblDate.dateFromDateTable
On second thought, this isn't at all what got me those results, because further attempts to use this exact query result in an error that I can't figure out...
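Hard to say what the error was without seeing the message, but the underlying idea does work: join a derived table of dates against the users table on dateAdded <= date, then group by date for a running count. A sketch in Python's sqlite3 with invented data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblUsers (ID INTEGER PRIMARY KEY, dateAdded TEXT)")
conn.executemany("INSERT INTO tblUsers (dateAdded) VALUES (?)",
                 [("2009-01-01",), ("2009-01-01",), ("2009-01-02",), ("2009-01-03",)])

# For each date in the derived table, count users added on or before it.
rows = conn.execute("""
    SELECT tblDate.dateFromDateTable, COUNT(u.ID)
    FROM (SELECT '2009-01-01' AS dateFromDateTable
          UNION SELECT '2009-01-02'
          UNION SELECT '2009-01-03') AS tblDate
    LEFT JOIN tblUsers u ON u.dateAdded <= tblDate.dateFromDateTable
    GROUP BY tblDate.dateFromDateTable
    ORDER BY tblDate.dateFromDateTable
""").fetchall()
print(rows)  # [('2009-01-01', 2), ('2009-01-02', 3), ('2009-01-03', 4)]
```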

Victor
Jun 18, 2004
hex, I recommend that you read Database in Depth, by C. J. Date.

functional
Feb 12, 2008

PostgreSQL?

What's a good way to learn PostgreSQL, preferably with a PHP spin? Not looking to relearn SQL, except as it pertains to Postgres.

functional fucked around with this message at 14:55 on Jan 13, 2009

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!

hexadecimal posted:

I sort by different criteria, but index is a primary key right? that is only applicable to one column in a table?
No.

Primary Keys are a type of index (unique, NOT NULL for all columns in the key), but not all indexes are primary keys.

EDIT: oops, hex was so wrong that I had to reply before I saw that there was a second page.

Jethro fucked around with this message at 18:26 on Jan 12, 2009
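Jethro's distinction in concrete form, sketched in Python's sqlite3 with a made-up table: the primary key is one unique, NOT NULL index, and the table can carry any number of additional indexes, unique or otherwise.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,   -- the primary key: unique and NOT NULL
        email TEXT NOT NULL,
        country TEXT
    )""")
# A unique index that is *not* the primary key...
conn.execute("CREATE UNIQUE INDEX idx_users_email ON users (email)")
# ...and a plain non-unique index.
conn.execute("CREATE INDEX idx_users_country ON users (country)")

# (SQLite stores an INTEGER PRIMARY KEY as the rowid, so it doesn't show up here.)
names = sorted(row[1] for row in conn.execute("PRAGMA index_list('users')"))
print(names)  # ['idx_users_country', 'idx_users_email']
```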

Atom
Apr 6, 2005

by Y Kant Ozma Post
running a query like this:

code:
SELECT su.ID
FROM search_users su
WHERE su.Theme IN
    (SELECT t.ID FROM themes t WHERE t.non_nude=0)
LIMIT 10;
Explain tells me my subquery is "dependent." Why is this so? I can't for the life of me figure out why and it's annoying. Should I not even worry about it?

edit: I think it's because the second query is being replaced with an index lookup function (the unique_subquery query type). However, as this table is only 100 rows long, I think MySQL caching the entire resultset in the query cache would be faster than doing an index lookup for each row. Can anyone confirm my predictions and tell me where to go from here?

Atom fucked around with this message at 20:51 on Jan 12, 2009

FamousThomas
Jul 16, 2006

Hey, Fry. Wake up, it's me, BIGFACE!
mySQL Question

I've got a site with a huge volume of traffic and I've noticed my "Handler_read_rnd_next" is getting huge, to put it lightly. The only explanation I can find for this is:

"The number of requests to read the next row in the data file. This is high if you are doing a lot of table scans. Generally this suggests that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have."

All my tables are indexed properly. Is there anything that generally sends this shooting up so high? The page does a lot of grabbing little bits of information and writing, but nothing too crazy.

Pento McGreno
May 4, 2004

Cheese.

FamousThomas posted:

mySQL Question

I've got a site with a huge volume of traffic and I've noticed my "Handler_read_rnd_next" is getting huge, to put it lightly. The only explanation I can find for this is:

"The number of requests to read the next row in the data file. This is high if you are doing a lot of table scans. Generally this suggests that your tables are not properly indexed or that your queries are not written to take advantage of the indexes you have."

All my tables are indexed properly. Is there anything that generally sends this shooting up so high? The page does a lot of grabbing little bits of information and writing, but nothing too crazy.

Do you do a lot of the following kind of queries? They'll both cause Handler_read_rnd_next to increase.
  • Queries with no WHERE statements:
    code:
    mysql> SELECT * FROM tbl;
  • Queries with range scans:
    code:
    mysql> SELECT * FROM tbl WHERE a > 10 AND a < 20;

Realistically, it's not actually a problem unless you're noticing performance issues.
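MySQL's handler counters can't be reproduced here, but the plans behind Pento's two cases can be, sketched in Python's sqlite3 with a toy table: a query with no WHERE clause walks every row no matter how well indexed the table is, while a range predicate can at least use an index where one exists.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl (a INTEGER)")
conn.execute("CREATE INDEX idx_a ON tbl (a)")
conn.executemany("INSERT INTO tbl VALUES (?)", [(i,) for i in range(100)])

# No WHERE clause: a full scan, regardless of any indexes.
no_where = conn.execute("EXPLAIN QUERY PLAN SELECT * FROM tbl").fetchone()[3]

# Range predicate: SQLite can use the index; MySQL may still count handler reads.
ranged = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM tbl WHERE a > 10 AND a < 20").fetchone()[3]

print(no_where)
print(ranged)
```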

Pento McGreno
May 4, 2004

Cheese.

Atom posted:

running a query like this:

code:
SELECT su.ID
FROM search_users su
WHERE su.Theme IN
    (SELECT t.ID FROM themes t WHERE t.non_nude=0)
LIMIT 10;
Explain tells me my subquery is "dependent." Why is this so? I can't for the life of me figure out why and it's annoying. Should I not even worry about it?
Probably nothing to worry about. Though, you can easily re-write this query as a JOIN. It'll save you the potential future headache of dealing with MySQL subqueries ('scuse the poor formatting).
code:
SELECT su.ID
FROM search_users su
LEFT JOIN themes t
ON su.Theme = t.ID
WHERE t.non_nude=0
LIMIT 10;
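(The WHERE t.non_nude=0 filters out the NULL-extended rows, so the LEFT JOIN behaves as an inner join here.) A quick equivalence check on toy data, sketched in Python's sqlite3 with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE themes (ID INTEGER PRIMARY KEY, non_nude INTEGER)")
conn.execute("CREATE TABLE search_users (ID INTEGER PRIMARY KEY, Theme INTEGER)")
conn.executemany("INSERT INTO themes VALUES (?, ?)", [(1, 0), (2, 1), (3, 0)])
conn.executemany("INSERT INTO search_users VALUES (?, ?)",
                 [(10, 1), (11, 2), (12, 3), (13, 1)])

in_rows = conn.execute("""
    SELECT su.ID FROM search_users su
    WHERE su.Theme IN (SELECT t.ID FROM themes t WHERE t.non_nude=0)
    ORDER BY su.ID""").fetchall()

join_rows = conn.execute("""
    SELECT su.ID FROM search_users su
    LEFT JOIN themes t ON su.Theme = t.ID
    WHERE t.non_nude=0
    ORDER BY su.ID""").fetchall()

print(in_rows == join_rows)  # True
```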

Pento McGreno
May 4, 2004

Cheese.

functional posted:

PostgreSQL?

What's a good way to learn PostgreSQL, preferably with a PHP spin? Not looking to relearn SQL, except as it pertains to Postgres.

Read the PHP PG docs:
http://php.net/pgsql

Read the plethora of tutorials on using PG and PHP:
http://www.google.com/search?q=postgresql+php+tutorial

chocojosh
Jun 9, 2007

D00D.
Do you have any tool that you recommend for drawing database diagrams for team discussions/design review?

My university recommends using ERDs (this is being used to help my team on an undergrad project course) but I find they're a pain to create and maintain. I much prefer having all of the attributes inside the same box as the table, like when you create a database diagram.

I would prefer something that runs on Linux, although I do have access to Windows. I have also heard that there are some tools that will auto-generate SQL statements, which would be pretty neat but definitely not something I need.

functional
Feb 12, 2008

Pento McGreno posted:

Read the plethora of tutorials on using PG and PHP:
http://www.google.com/search?q=postgresql+php+tutorial

Right... Not that your recommendation to use Google isn't helpful, but actually I was hoping that someone could save time by pointing to something in particular. The problem with tutorials on popular subjects is that there are many of them but typically only one worth reading.

The PHP PG docs are a good idea. I didn't think that they would have those.

Atom
Apr 6, 2005

by Y Kant Ozma Post

Pento McGreno posted:

Probably nothing to worry about. Though, you can easily re-write this query as a JOIN. It'll save you the potential future headache of dealing with MySQL subqueries ('scuse the poor formatting).

At the end of the day I just constructed a giant IN() statement (with the resultset of what would have been the subquery) and it seems to be the fastest way.

megalodong
Mar 11, 2008

Atom posted:

running a query like this:

code:
SELECT su.ID
FROM search_users su
WHERE su.Theme IN
    (SELECT t.ID FROM themes t WHERE t.non_nude=0)
LIMIT 10;
Explain tells me my subquery is "dependent." Why is this so? I can't for the life of me figure out why and it's annoying. Should I not even worry about it?

edit: I think it's because the second query is being replaced with an index lookup function (the unique_subquery query type). However, as this table is only 100 rows long, I think MySQL caching the entire resultset in the query cache would be faster than doing an index lookup for each row. Can anyone confirm my predictions and tell me where to go from here?
In typical mysql fashion, uncorrelated subqueries are re-written as correlated ones when you use an IN clause.

e.g.
code:
SELECT su.ID
FROM search_users su
WHERE su.Theme IN 
    (SELECT t.ID FROM themes t WHERE t.non_nude = 0)
LIMIT 10;
is changed to
code:
SELECT su.ID
FROM search_users su
WHERE EXISTS 
    (SELECT 1 
     FROM themes t 
     WHERE t.non_nude = 0 
     AND t.ID = su.Theme)
LIMIT 10;
So, don't use mysql.
If you must use mysql, don't use subqueries with IN.
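Whatever the planner does with them, the two forms are semantically interchangeable, which is easy to confirm on toy data (Python's sqlite3 sketch, invented rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE themes (ID INTEGER PRIMARY KEY, non_nude INTEGER)")
conn.execute("CREATE TABLE search_users (ID INTEGER PRIMARY KEY, Theme INTEGER)")
conn.executemany("INSERT INTO themes VALUES (?, ?)", [(1, 0), (2, 1), (3, 0)])
conn.executemany("INSERT INTO search_users VALUES (?, ?)",
                 [(10, 1), (11, 2), (12, 3)])

in_form = conn.execute("""
    SELECT su.ID FROM search_users su
    WHERE su.Theme IN (SELECT t.ID FROM themes t WHERE t.non_nude = 0)
    ORDER BY su.ID""").fetchall()

# The correlated rewrite MySQL performs internally:
exists_form = conn.execute("""
    SELECT su.ID FROM search_users su
    WHERE EXISTS (SELECT 1 FROM themes t
                  WHERE t.non_nude = 0 AND t.ID = su.Theme)
    ORDER BY su.ID""").fetchall()

print(in_form == exists_form)  # True
```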

Rocko Bonaparte
Mar 12, 2002

Every day is Friday!
Well who would have thought I'd have problems doing queries on a database with 9 million rows. This is that table of stock data going back to ~1983 or so I mentioned a while ago. I seem to be getting JVM crashes when I try to query some of the data. The MySQL Connector JDBC driver seems to be the main point of failure, but could I really blame it? For both a Sun 1.5 and an IBM 1.6 JVM (both 64-bit) I was crashing trying to query about a year's worth of data. I don't get much indication as to what's really wrong, so I don't know what's going to fix it.

I am wondering if I should split the table up into a table per symbol. Say I have about 2,600 symbols, so 2,600 tables, and I guess a master table to index into that. I'd have to go from table to table and pull out the date span I want, but would that likely be more stable? Would it be faster? I wonder if I'm running into a big-O scaling problem with such a large set. I would assume the database is trying to hash under the hood to make the access times constant, but I'm no expert.

I thought I'd ask if splitting up into sub tables has its performance merits before I throw a lot of code into it.

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!

Rocko Bonaparte posted:

Well who would have thought I'd have problems doing queries on a database with 9 million rows. This is that table of stock data going back to ~1983 or so I mentioned a while ago. I seem to be getting JVM crashes when I try to query some of the data. The MySQL Connector JDBC driver seems to be the main point of failure, but could I really blame it? For both a Sun 1.5 and an IBM 1.6 JVM (both 64-bit) I was crashing trying to query about a year's worth of data. I don't get much indication as to what's really wrong, so I don't know what's going to fix it.

I am wondering if I should split the table up into a table per symbol. Say I have about 2,600 symbols, so 2,600 tables, and I guess a master table to index into that. I'd have to go from table to table and pull out the date span I want, but would that likely be more stable? Would it be faster? I wonder if I'm running into a big-O scaling problem with such a large set. I would assume the database is trying to hash under the hood to make the access times constant, but I'm no expert.

I thought I'd ask if splitting up into sub tables has its performance merits before I throw a lot of code into it.
I am not a DBA, but if you're going to be doing queries that return data for multiple symbols I would not be surprised if the performance is a lot worse if you split up the data.

I'd say your first course of action should be to do a lot more testing/research to see what the cause of the crash is. You say the driver crashes when you try to retrieve a year's worth of data. Is 6 months ok? 9 months? Are your queries only returning the data you need, or are you returning all the data and then doing filtering after the fact in the application? When you say "I don't get much indication as to what's really wrong" does that mean you aren't getting any error messages/exceptions, or does that mean you are getting a message that doesn't mean anything to you personally?

Xae
Jan 19, 2005

Anyone have ideas on how to pull out data in a denormalized fashion?

I have two tables, they are related as a many to one. I want it to come out as


"Parent.Column1 | Parent.Column2 | Child.Column1-A,Child.Column1-B...Child.Column1-N | Parent.Column3"

Victor
Jun 18, 2004
Some RDBMSes allow you to do this easily. For example, SQL2005 has the pivot operator.

Xae
Jan 19, 2005

Victor posted:

Some RDBMSes allow you to do this easily. For example, SQL2005 has the pivot operator.

Thanks for the tip, I'll have to check out the docs and see if Teradata supports it. Oh god, another night digging through Teradata docs :suicide:.

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!

Victor posted:

Some RDBMSes allow you to do this easily. For example, SQL2005 has the pivot operator.
To expand on this, if you don't have a pivot operator, you usually do it with GROUP BY and CASE statements.
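The GROUP BY + CASE pivot, sketched in Python's sqlite3 with a made-up sales table: one row per (region, quarter) becomes one row per region with a column per quarter. Note that each distinct value needs its own CASE column, which is why this only works when the set of values is known up front.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, quarter TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("east", "Q1", 10), ("east", "Q2", 20),
    ("west", "Q1", 5),  ("west", "Q2", 7),
])

# One CASE expression per pivoted column; GROUP BY collapses each region
# back to a single row.
rows = conn.execute("""
    SELECT region,
           SUM(CASE WHEN quarter = 'Q1' THEN amount ELSE 0 END) AS q1,
           SUM(CASE WHEN quarter = 'Q2' THEN amount ELSE 0 END) AS q2
    FROM sales
    GROUP BY region
    ORDER BY region""").fetchall()
print(rows)  # [('east', 10, 20), ('west', 5, 7)]
```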

Green_Machine
Jun 28, 2008
I need to write a query that has to run in a tight loop in my application, so I want to make it efficient.

Is there any significant difference in efficiency between IN versus EXISTS? What about NOT IN or NOT EXISTS?

I'm using the MS SQL CE engine, but I would be interested to hear if there's an efficiency difference in MS SQL Express or MS SQL Server as well.

functional
Feb 12, 2008

Reading PostgreSQL Gotchas, came across this neat query:

code:
SELECT id, is_true
FROM (SELECT id, RANDOM() < 0.5 AS is_true FROM some_table OFFSET 0) AS t_tmp
WHERE is_true;
The gotcha is that the "OFFSET 0" part is necessary: it stops the planner from flattening the subquery, which would make RANDOM() get re-evaluated in the outer WHERE clause.

Victor
Jun 18, 2004

Jethro posted:

To expand on this, if you don't have a pivot operator, you usually do it with GROUP BY and CASE statements.
That would probably actually be a good way to explain the pivot operator...

Xae
Jan 19, 2005

Victor posted:

That would probably actually be a good way to explain the pivot operator...

The problem is that there are '...N' possible values. We kicked some ideas around, but we couldn't figure out a way to flatten a child table into one column in the script. We'll have to mess with the ETL side of the system, or just say 'gently caress it'.

fucktart
Oct 25, 2004
Fuck you, bleachtcc
This question is probably in the 95th percentile of stupidity for this thread... anyway... I have a MySQL database of wireless networks that currently has about 3000 rows but is expected to grow a lot. I'm using the following abortion of a query to check for networks that have the same MAC address:
code:
SELECT * FROM networks
WHERE bssid IN
(
 SELECT bssid FROM
 (
  SELECT * FROM networks
  GROUP BY bssid, type, essid
 ) AS tmp
 GROUP BY bssid
 HAVING COUNT(*)>1
)
ORDER BY bssid, essid ASC
This isn't being used in an application, just for testing. I want to select rows that have the same bssid but different values for essid and type. But what's a better way to do this? It takes about 20 seconds to run and return 13 rows, then subsequent runs seem to be cached.

Bad Titty Puker
Nov 3, 2007
Soiled Meat

fucktart posted:

This question is probably in the 95th percentile of stupidity for this thread... anyway... I have a MySQL database of wireless networks that currently has about 3000 rows but is expected to grow a lot. I'm using the following abortion of a query to check for networks that have the same MAC address:
code:
SELECT * FROM networks
WHERE bssid IN
(
 SELECT bssid FROM
 (
  SELECT * FROM networks
  GROUP BY bssid, type, essid
 ) AS tmp
 GROUP BY bssid
 HAVING COUNT(*)>1
)
ORDER BY bssid, essid ASC
This isn't being used in an application, just for testing. I want to select rows that have the same bssid but different values for essid and type. But what's a better way to do this? It takes about 20 seconds to run and return 13 rows, then subsequent runs seem to be cached.

Hm, interesting. Can you post DDL including primary key and indexes on "networks"?

This is untested - not sure if it works. It should outperform the query in the OP but the indexes will make a difference.

code:
SELECT 
   n.bssid, <col2>, <... coln>    /* avoid using SELECT * in production code... */
FROM
   networks n
WHERE EXISTS
   (SELECT *    /* ... but inner queries of EXISTS clause can use SELECT * */
    FROM networks n2
    WHERE n2.bssid = n.bssid
    AND (n2.type <> n.type  OR  n2.essid <> n.essid) )

Bad Titty Puker fucked around with this message at 08:29 on Jan 18, 2009
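The EXISTS query above, tried against toy rows in Python's sqlite3 (trimmed column list, invented data): rows that share a bssid but disagree on essid or type both come back, and a lone bssid doesn't.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE networks (id INTEGER PRIMARY KEY, bssid TEXT, essid TEXT, type TEXT)")
conn.executemany("INSERT INTO networks (bssid, essid, type) VALUES (?, ?, ?)", [
    ("aa:aa", "home",  "infra"),
    ("aa:aa", "home2", "infra"),   # same bssid, different essid -> conflict
    ("bb:bb", "cafe",  "infra"),   # unique bssid -> not returned
])

rows = conn.execute("""
    SELECT n.id, n.bssid
    FROM networks n
    WHERE EXISTS (SELECT 1 FROM networks n2
                  WHERE n2.bssid = n.bssid
                  AND (n2.type <> n.type OR n2.essid <> n.essid))
    ORDER BY n.id""").fetchall()
print(rows)  # [(1, 'aa:aa'), (2, 'aa:aa')]
```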

fucktart
Oct 25, 2004
Fuck you, bleachtcc

camels posted:

Hm, interesting. Can you post DDL including primary key and indexes on "networks"?
code:
CREATE TABLE `networks` (
  `id` int(10) unsigned NOT NULL, /* auto increment */
  `upload_ip` varchar(20) NOT NULL,
  `upload_time` bigint(20) unsigned NOT NULL,
  `upload_fileno` tinyint(4) NOT NULL,
  `ignore_this` tinyint(4) NOT NULL default '0',
  `bssid` varchar(20) NOT NULL,
  `essid` varchar(35) NOT NULL,
  `type` varchar(50) NOT NULL,
  `signal` smallint(6) NOT NULL,
  `freq` decimal(5,4) NOT NULL,
  `channel` tinyint(4) NOT NULL,
  `lat` decimal(13,8) NOT NULL,
  `lon` decimal(13,8) NOT NULL,
  `time` bigint(20) unsigned NOT NULL,
  `speed` decimal(10,6) NOT NULL,
  `bearing` decimal(13,8) NOT NULL,
  PRIMARY KEY  (`id`),
  KEY `bssid` (`bssid`)
);

camels posted:

This is untested - not sure if it works. It should outperform the query in the OP but the indexes will make a difference.
It didn't outperform my query noticeably (maybe by a few seconds)... until I added an index on bssid. Not sure why I didn't think to do that before. But now mine takes about the same time and yours takes .2 sec. Thanks.

fucktart fucked around with this message at 16:57 on Jan 18, 2009

Bad Titty Puker
Nov 3, 2007
Soiled Meat

fucktart posted:

code:
CREATE TABLE `networks` (
  `id` int(10) unsigned NOT NULL, /* auto increment */
  `upload_ip` varchar(20) NOT NULL,
  `upload_time` bigint(20) unsigned NOT NULL,
  `upload_fileno` tinyint(4) NOT NULL,
  `ignore_this` tinyint(4) NOT NULL default '0',
  `bssid` varchar(20) NOT NULL,
  `essid` varchar(35) NOT NULL,
  `type` varchar(50) NOT NULL,
  `signal` smallint(6) NOT NULL,
  `freq` decimal(5,4) NOT NULL,
  `channel` tinyint(4) NOT NULL,
  `lat` decimal(13,8) NOT NULL,
  `lon` decimal(13,8) NOT NULL,
  `time` bigint(20) unsigned NOT NULL,
  `speed` decimal(10,6) NOT NULL,
  `bearing` decimal(13,8) NOT NULL,
  PRIMARY KEY  (`id`),
  KEY `bssid` (`bssid`)
);
It didn't outperform my query noticeably (maybe by a few seconds)... until I added an index on bssid. Not sure why I didn't think to do that before. But now mine takes about the same time and yours takes .2 sec. Thanks.

Cool

Victor
Jun 18, 2004

Xae posted:

The problem is that there are '...N' possible values. We kicked some ideas around, but we couldn't figure out a way to flatten a child table into one column in the script. We'll have to mess with the ETL side of the system, or just say 'gently caress it'.
Yeah, so you code-gen the TSQL...

Xae
Jan 19, 2005

Victor posted:

Yeah, so you code-gen the TSQL...

Yea, if I could use a procedural language it would be easy to do with a stored Proc. But the Support Team has a pretty steep charge to put in a stored procedure, and no one wants to pay it.

I ended up just telling them it was impossible unless they wanted to pay. The joys of internal billing.

Bad Titty Puker
Nov 3, 2007
Soiled Meat

Green_Machine posted:

I need to write a query that has to run in a tight loop in my application, so I want to make it efficient.

Is there any significant difference in efficiency between IN versus EXISTS? What about NOT IN or NOT EXISTS?

I'm using the MS SQL CE engine, but I would be interested to hear if there's an efficiency difference in MS SQL Express or MS SQL Server as well.

Not sure if this is applicable, but would it be possible to retrieve a bigger result set to replace at least some of the repeated querying in a loop?

IN vs. EXISTS- it depends. If the number of rows in the subquery is small, IN can outperform EXISTS. EXISTS will often outperform IN when the subquery returns more rows, because it can stop as soon as it finds a match. But, as usual, it depends on the data, indexes, and table structures.
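One concrete caveat on the negated side, easy to demonstrate (Python's sqlite3, toy tables): if the subquery can return NULL, NOT IN and NOT EXISTS are not equivalent. Every comparison against the NULL is unknown, so NOT IN returns no rows at all, while NOT EXISTS behaves as expected.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE a (x INTEGER)")
conn.execute("CREATE TABLE b (x INTEGER)")
conn.executemany("INSERT INTO a VALUES (?)", [(1,), (2,), (3,)])
conn.executemany("INSERT INTO b VALUES (?)", [(1,), (None,)])

not_in = conn.execute(
    "SELECT x FROM a WHERE x NOT IN (SELECT x FROM b)").fetchall()
not_exists = conn.execute(
    "SELECT x FROM a WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.x = a.x)").fetchall()

print(not_in)      # [] -- the NULL in b poisons every NOT IN comparison
print(not_exists)  # [(2,), (3,)]
```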

Victor
Jun 18, 2004

Xae posted:

Yea, if I could use a procedural language it would be easy to do with a stored Proc. But the Support Team has a pretty steep charge to put in a stored procedure, and no one wants to pay it.
sp_executesql

Xae
Jan 19, 2005

Victor posted:

sp_executesql

That is sort of nifty; I'll have to keep it in mind for the SQL Server systems. I don't see a Teradata equivalent, though.

Bad Titty Puker
Nov 3, 2007
Soiled Meat

Xae posted:

Anyone have ideas on how to pull out data in a denormalized fashion?

I have two tables, they are related as a many to one. I want it to come out as


"Parent.Column1 | Parent.Column2 | Child.Column1-A,Child.Column1-B...Child.Column1-N | Parent.Column3"

On the off chance that this might be feasible - if the result set is not too large, would it be possible to do the pivoting on the client side, with a reporting tool?

Xae
Jan 19, 2005

camels posted:

On the off chance that this might be feasible - if the result set is not too large, would it be possible to do the pivoting on the client side, with a reporting tool?

The result set is in the tens of millions of rows and dumps to flat files.

It is definitely possible to do, and in normal circumstances that is probably how I would do it. But right now budgets are getting slashed, so things like this that are nice to have but are expensive aren't being funded.

glint
Nov 15, 2004

Dear god, The patient's best intentions have sadly faltered.
I want to write an sql script which I call like

sqlcmd -S server\SQL2005 -E -i"c:\support\loads.sql" -o "V:\ncap\SQLServer.sql"

I want my loads.sql to first dump the PersonID from dname.dname_person into a temporary table

which I think I do like

code:
SELECT dname.dname_person.person_id INTO #perftest
FROM dname.dname_person
then I want to call the procedure below once for each person ID I get from the command above

quote:

exec dname.sta_pix_EditInformation_GetDetails_705 @PersonID=NNNN,@effective_date='Jan 19 2009 12:00:00:000AM',@excludePhoto=1

where NNNN is my PersonID

I'm clueless with SQL, but I could resort to dumping the first part to text and then using a FOR loop. I'd prefer to do it all in SQL though. Can anyone help?

Victor
Jun 18, 2004
MSSQL is annoying in that you have to use cursors if you want to run a sproc against each entry in a relation (table). SQL2008 will allow passing tables to sprocs, which should help alleviate this problem, although it will make things more complex for when you only want to operate on one value...
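If a cursor in the script is a non-starter, the per-row loop can also live client-side: fetch the IDs once, then run the per-person statement for each. A rough sketch of that pattern in Python's sqlite3 (table name from the post; the inner SELECT stands in for the real EXEC dname.sta_pix_EditInformation_GetDetails_705 call, which SQLite can't run):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dname_person (person_id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO dname_person VALUES (?, ?)",
                 [(1, "a"), (2, "b"), (3, "c")])

details = []
for (person_id,) in conn.execute(
        "SELECT person_id FROM dname_person ORDER BY person_id"):
    # Stand-in for the stored procedure call, one invocation per person_id.
    row = conn.execute(
        "SELECT person_id, name FROM dname_person WHERE person_id = ?",
        (person_id,)).fetchone()
    details.append(row)

print(details)  # one result row per person
```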

cannibustacap
Jul 7, 2003

Brrrruuuuuiinnssss
I have XAMPP set up with MySQL working and everything, but I'd like to only deal with gzipped SQL files. I have a database that is 2 GB, but its gzipped SQL dump is closer to 100 MB.


Anyway, I read online that you can use the command:

C:\xampp\mysql\bin>gunzip < local_backup_20090120_newadmin.sql.gz | mysql -u root my_database

But I consistently get
'gunzip' is not recognized as an internal or external command, operable program or batch file.

out of it.

So, how do I get gzip working with my XAMPP 1.70 MySQL install?

EDIT: just fixed some formatting

cannibustacap fucked around with this message at 08:23 on Jan 21, 2009

oversteer
Jun 6, 2005

cannibustacap posted:


C:\xampp\mysql\bin>gunzip < local_backup_20090120_newadmin.sql.gz | mysql -u root my_database

Can you use pipes and redirects like that on Windows?

I'd try extracting the .gz file using something (gunzip, WinRAR etc) then do

type local_backup_20090120 | mysql blah blah

pikes
Aug 9, 2002
:o
I have a sql 2005 issue that is very strange. It has to do with index fragmentation. Here's the story:

First of all, I want to preface this with the fact that I'm not officially a DBA here (just a support guy) and we have no DBA. There's no organization with maintenance routines on the server or anything right now either but I digress.

We were having issues with slowness so I checked the indexes on the table in question and found that the clustered index on the primary key (which is a GUID [yes, I know the indexing issues with using guids as primary keys and indexing]) was fragmented at 99%. I rebuilt the index which sped everything up so everyone went on their merry way.

Later, we had the same issue. Checked the index and it was back at 99% fragmented. I rebuilt it again and set up a job to automatically rebuild the index every night. It takes about 7 minutes to rebuild so it's not an issue doing it nightly.

Had the same issue AGAIN! The index was 99% fragmented AGAIN even though the job ran successfully the night before (it sends me emails letting me know the job completes). I rebuilt it and turned off the job so I could watch how fragmented the index gets from day to day. The fragmentation level has been growing at like 1-2% per day.

We had slowness on another table yesterday that was linked to another index fragmentation problem. The query pulls 2 different fields in the where clause so I looked at those indexes and one of them was at 99%, and the other at like 30%ish. I rebuilt both indexes last night (checking the fragmentation levels after rebuilding to make sure they were both at 0%) thinking everything would be fine. This morning we had the same slowness on that table, and the index that was at 99% fragmentation last night (before I rebuilt) was back at 99%. This table has absolutely no inserts/updates/deletes done to it except for on Monday nights.

What the heck could cause my indexes to shoot up to 99% in one night? It's not the backups because the first index I talked about is still doing well whereas the second one shot up from 0% to 99% in one night. Is there any way to monitor what kind of queries are hitting your database? We don't have any sort of audit tables set up right now so I can't check that, but would that be the only way?
