Rumsfoord
Jan 29, 2005

Eat or Be Eaten

Rakshas posted:

I have two tables - the first one has a bunch of company information and user#s. The second table is strange, as it has a bunch of rows I need formatted as columns in the first table. I need one query to rule them all. I'd like to run the query and get something that resembles the Result Table.

I'm being kind of lazy tonight, and I don't have the book I wanted to look this up in handy (I know it covers this topic because I've used it before).

The book is SQL Cookbook by Anthony Molinaro. It is kind of nice because it lays out solutions to problems like this in SQL Server/MySQL/DB2 & postgres side by side.

I'll try to remember to read that section again tomorrow to refresh my memory, and I'll post something tomorrow.

Anyway, if the columns are always going to be fixed, something like this might work:
http://saveabend.blogspot.com/2006/10/mysql-crosstabs.html
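The trick on that page is basically conditional aggregation -- a rough sketch of the idea, with made-up table and column names since I don't know your real schema:

code:
-- one row per company, with the second table's rows pivoted into columns
SELECT c.company_id,
       c.company_name,
       MAX(CASE WHEN a.attrib_name = 'phone' THEN a.attrib_value END) AS phone,
       MAX(CASE WHEN a.attrib_name = 'fax'   THEN a.attrib_value END) AS fax
FROM companies c
LEFT JOIN attribs a ON a.company_id = c.company_id
GROUP BY c.company_id, c.company_name;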

<deleted user>

da keebsta knicca posted:

I have deleted a bunch of indexes and cleared everything out still no go with this simplified SQL.

code:
SELECT *
FROM job_postings
WHERE MATCH (company_desc,website,position_desc,qualification,employer)
        AGAINST ('html');

And how many rows of data are in the table you are running the query against? Make sure you have enough data in the table that MySQL isn't "optimizing away" your results because of the 50% rule. From here:

quote:

A word that matches half of the rows in a table is less likely to locate relevant documents. In fact, it most likely finds plenty of irrelevant documents. We all know this happens far too often when we are trying to find something on the Internet with a search engine. It is with this reasoning that rows containing the word are assigned a low semantic value for the particular data set in which they occur. A given word may reach the 50% threshold in one data set but not another.

The 50% threshold has a significant implication when you first try full-text searching to see how it works: If you create a table and insert only one or two rows of text into it, every word in the text occurs in at least 50% of the rows. As a result, no search returns any results. Be sure to insert at least three rows, and preferably many more. Users who need to bypass the 50% limitation can use the boolean search mode...
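For reference, bypassing the 50% rule just means adding IN BOOLEAN MODE -- a quick sketch against the same simplified query from above:

code:
SELECT *
FROM job_postings
WHERE MATCH (company_desc,website,position_desc,qualification,employer)
        AGAINST ('html' IN BOOLEAN MODE);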

If you're sure you have varied data inserted, run "REPAIR TABLE `job_postings`". Then run a query with EXPLAIN and a suitably-unique word for the AGAINST clause:

code:
mysql> explain select ID from job_postings WHERE match(company_desc,website,position_desc,qualification,employer) against('lawn');
+----+-------------+--------------+----------+---------------+--------------+---------+------+------+-------------+
| id | select_type | table        | type     | possible_keys | key          | key_len | ref  | rows | Extra       |
+----+-------------+--------------+----------+---------------+--------------+---------+------+------+-------------+
|  1 | SIMPLE      | job_postings | fulltext | company_desc  | company_desc |       0 |      |    1 | Using where |
+----+-------------+--------------+----------+---------------+--------------+---------+------+------+-------------+
1 row in set (0.00 sec)
You're looking for "fulltext" listed in the type column.

Here's what I put in my table for the test:

code:
mysql> select company_desc from job_postings;
+--------------------------------------------------------------+
| company_desc                                                 |
+--------------------------------------------------------------+
| burgeoning bosoms of steel magnolias are on the lawn gnome   |
| cornish hens are attacking my lawn gnomes down by the casino |
| cost plus world market is a sham                             |
| a bucket is a fuckit                                         |
| large marge sent us                                          |
| paintball champions of the world                             |
+--------------------------------------------------------------+
6 rows in set (0.00 sec)
Here's my fulltext result:

code:
mysql> select company_desc from job_postings WHERE match(company_desc,website,position_desc,qualification,employer) against('lawn');
+--------------------------------------------------------------+
| company_desc                                                 |
+--------------------------------------------------------------+
| burgeoning bosoms of steel magnolias are on the lawn gnome   |
| cornish hens are attacking my lawn gnomes down by the casino |
+--------------------------------------------------------------+
2 rows in set (0.00 sec)

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
I wanna add the ability to search my site and I'm wondering if I should even bother with MySQL's fulltext after hearing so many bad things about it. What are some alternatives I should look at that I can use with PHP?

Aredna
Mar 17, 2007
Nap Ghost
I've got two questions, one SQL related, and the other kinda SQL related.

First: Does anyone have experience with Greenplum? I haven't been able to find any good resources for using it efficiently. Amazon has no books and I haven't been able to find any sites on the topic. We're starting to transition to it at work for reporting (from SQL Server) and some things are much faster, but the design philosophy behind some of the queries seems different than how I would do it on SQL Server. Any help at all would be very useful.

Second: Does anyone have experience automating SQL query parameters from an e-mail? The e-mail is in a set format every day and is just plain text, so a number of tools could be used to parse out the data. The problem is that I have no idea how to go about taking the e-mail from when I receive it in Outlook and getting it to the SQL server to run.

TheReverend
Jun 21, 2005

Alright I have a design issue.

Suppose I want to record locations like "123 East Street" or "685 Candy Cane Lane". Well, those are simple enough to store in a table called "Addresses".

But each address could have zero or more different objects like "house", "mailbox", "garage", "pond", "barn", "shed", etc. And each object has its own independent attributes like "size", "color", "number of cars", "livestock", etc. The attributes of one object are not necessarily the same as another's.

My problem is: how do I record all of these objects and their attributes in SQL tables without having a million tables or a million columns?


Thanks!

noonches
Dec 5, 2005

It represents my lifestyle and stratus as a street savy irreverent youth, who lives large, and yet hungers for the next level in life.
You could make a linked table that is an EAV model. Your description sounds pretty much like it falls in the one scenario where you'd want to implement that.
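A bare-bones version of that linked table could look something like this (MySQL-ish syntax, names just for illustration):

code:
CREATE TABLE address_objects (
  object_id   INT NOT NULL PRIMARY KEY,
  address_id  INT NOT NULL,           -- FK to Addresses
  object_type VARCHAR(50) NOT NULL    -- 'house', 'mailbox', 'pond', ...
);

CREATE TABLE object_attributes (
  object_id    INT NOT NULL,          -- FK to address_objects
  attrib_name  VARCHAR(50) NOT NULL,  -- 'color', 'size', 'livestock', ...
  attrib_value VARCHAR(255),
  PRIMARY KEY (object_id, attrib_name)
);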

npe
Oct 15, 2004
I've been on the defensive end of a few EAV fights, but I'll always say that this is not the case to use it. If you can model your relationships as tables, then do it. There's no reason to fear having a lot of tables, that's exactly what the database is there for.

EAV gains traction in cases where the application has to be designed to allow dynamic run-time modification of the relationships by the users on a constant basis. I don't want to get dragged into this argument again but that's where at least some people will justify using it.

If you can model your relationships statically, it's always the way to go.

Edited to add: the reason to hate on this so much is that if you're just going with a "dynamic" model to avoid having to manage all of these relationships, it's just self-delusion. These relationships need to be defined and managed somewhere and if you don't do it via your schema design, you'll end up doing it either in the table data or in your application code. It will end up being more work, more prone to problems, and harder to version control, and piss off everyone who has to work on it later.
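Concretely, for the address example above, modeling it statically just means one table per object type, each with only the attributes that apply to it -- a rough sketch with hypothetical columns:

code:
CREATE TABLE houses (
  address_id INT NOT NULL,  -- FK to Addresses
  color      VARCHAR(30),
  size_sqft  INT
);

CREATE TABLE garages (
  address_id     INT NOT NULL,
  number_of_cars INT
);
-- ...and so on, one table per object type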

npe fucked around with this message at 21:04 on Apr 9, 2008

TheReverend
Jun 21, 2005

For what it is worth, this will run on a PDA and the user will supply their own list of objects and attributes.

da keebsta knicca
Sep 12, 2000

Oh Ruutu, you are such a card.

genericadmin posted:

Helping me..


EDIT: Creating a new table worked.. thanks everyone

da keebsta knicca fucked around with this message at 23:47 on Apr 9, 2008

Rumsfoord
Jan 29, 2005

Eat or Be Eaten

yaoi prophet posted:

I've been on the defensive end of a few EAV fights, but I'll always say that this is not the case to use it. If you can model your relationships as tables, then do it. There's no reason to fear having a lot of tables, that's exactly what the database is there for.

EAV gains traction in cases where the application has to be designed to allow dynamic run-time modification of the relationships by the users on a constant basis. I don't want to get dragged into this argument again but that's where at least some people will justify using it.

If you can model your relationships statically, it's always the way to go.

Edited to add: the reason to hate on this so much is that if you're just going with a "dynamic" model to avoid having to manage all of these relationships, it's just self-delusion. These relationships need to be defined and managed somewhere and if you don't do it via your schema design, you'll end up doing it either in the table data or in your application code. It will end up being more work, more prone to problems, and harder to version control, and piss off everyone who has to work on it later.


I also hate EAV. We have a system that uses it at work and it is a hosed up piece of poo poo. Constraints end up being an afterthought you have to enforce in code -- think of a house that has 4 address attributes that are all active -- this happens in EAV if the coders aren't careful. And the coders will never be careful enough. Which loving address is correct? We made an arbitrary rule at work to use the minimum active attribute of a given type.. but this is just loving bullshit.

Data access is also slow, because usually you end up with functions that you pass a class id, entity id, and type id into. Most of those functions blow away indexes.

Stay away unless it is the only option.

Stephen
Feb 6, 2004

Stoned
I am trying to query a table with almost a million rows and the results are running slow as hell. I created an index on the datetime column 'view_date', however my query now returns no rows at all!

code:
explain SELECT views.auto_id, COUNT( views.id ) AS search_views
FROM views
WHERE views.view_type_id =1
AND views.view_date
BETWEEN "2007-04-01 00:00:00" 
AND "2008-04-31 23:59:59" 
GROUP BY views.auto_id
+----+-------------+-------+------+---------------+------+---------+------+------+-------------------------------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                                                 |
+----+-------------+-------+------+---------------+------+---------+------+------+-------------------------------------------------------+
|  1 | SIMPLE      | NULL  | NULL | NULL          | NULL | NULL    | NULL | NULL | Impossible WHERE noticed after reading const table... |
+----+-------------+-------+------+---------------+------+---------+------+------+-------------------------------------------------------+

That is the result I get when I EXPLAIN the query.. can anyone tell me why my datetime index is failing?

stack
Nov 28, 2000
How can I make several tables keep their timestamps in sync with a central watch table?
code:
-- example trimmed down tables
CREATE TABLE `watch` (
 `server` char(20) NOT NULL,
 `updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 PRIMARY KEY (`server`)
) Engine = InnoDB;

CREATE TABLE `email` (
 `server` char(20) NOT NULL,
 `updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
) Engine = InnoDB;

-- other tables along the email table
My goal is to refresh the updated timestamp in watch whenever a row in email (or one of the other non-watch tables) with a matching server value is updated.

Can this be done with foreign keys or do I need to use a trigger?

Jethro
Jun 1, 2000

I was raised on the dairy, Bitch!

stack posted:

How can I make several tables keep their timestamps in sync with a central watch table?
code:
-- example trimmed down tables
CREATE TABLE `watch` (
 `server` char(20) NOT NULL,
 `updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
 PRIMARY KEY (`server`)
) Engine = InnoDB;

CREATE TABLE `email` (
 `server` char(20) NOT NULL,
 `updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP
) Engine = InnoDB;

-- other tables along the email table
My goal is to refresh the updated timestamp in watch whenever a row in email (or one of the other non-watch tables) with a matching server value is updated.

Can this be done with foreign keys or do I need to use a trigger?
I think the best way to do it would be to normalize your data by getting rid of the `updated` field in all the tables except `watch`, and just keep that one column in one table up to date.

EDIT: But yes, if you insist on doing it the other way (or are unable to change how things work) I'm pretty sure the only way to do it automagically would be to use a trigger.
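If you do go the trigger route, a minimal sketch for the `email` table might look something like this (one trigger per non-watch table; MySQL 5.0+):

code:
DELIMITER //
CREATE TRIGGER email_touch_watch
AFTER UPDATE ON email
FOR EACH ROW
BEGIN
  -- bump the central timestamp for the matching server row
  UPDATE watch SET updated = CURRENT_TIMESTAMP WHERE server = NEW.server;
END//
DELIMITER ;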

TheReverend
Jun 21, 2005

Okay so instead of a EAV design, I'm thinking about this:

code:
LocationID     ObjectID      ParentID      Name      Value

1                 5             5         House        ''    
1                 6             5         Color       green
1                 7             7          Car         ''
1                 8             7         Make        Lincoln      
The attributes and objects are all in one table. Each row references the LocationID. Each row has its own ObjectID. Each row also has a "ParentID" so we know what attributes are associated with what objects. And rows where the ParentID is the same as the ObjectID are objects and not attributes.

Is this a really lovely idea? I don't have a lot of database training/education. I work mostly with applications but every now and then I have to worry about database stuff so I appreciate the help.

Reminder that this will run on a PDA. Also, there really isn't a whole lot of "reading" going on. It is mostly writing (data collection).

noonches
Dec 5, 2005

It represents my lifestyle and stratus as a street savy irreverent youth, who lives large, and yet hungers for the next level in life.
Maybe I'm lost in the definition somewhere, but isn't that essentially an EAV model?

npe
Oct 15, 2004
Yes, that's EAV.

If you need dynamic modification of your model at run time, you essentially have two options:

(1) modify your literal schema dynamically using the data definition language (create/alter tables, basically) called from the application code
(2) use an EAV approach and basically implement a database on top of your database as you are doing with your example

They are both lovely approaches. Which one is worse is going to depend on the needs of your application, the platform you are on, and probably your own biases.

Actually, the third option that is advocated by Oracle guru Tom Kyte is to create as many columns as a table will allow, and name them something like FIELD_001 through FIELD_999 or what have you. Then you just deal with your relationships that way, somehow. I'm a huge fan of his but I have to admit this is pretty :downs:
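To be concrete, option (1) just means issuing DDL from the application as the user's model changes -- a tiny sketch with made-up names:

code:
-- user invents a new "pond" object type with a "depth" attribute at run time,
-- so the application grows the schema itself
CREATE TABLE ponds (
  location_id INT NOT NULL,
  depth       DECIMAL(6,2)
);
-- later the user adds another attribute
ALTER TABLE ponds ADD COLUMN stocked TINYINT(1);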

npe fucked around with this message at 18:59 on Apr 10, 2008

chocojosh
Jun 9, 2007

D00D.

Aredna posted:

Second: Does anyone have experience automating SQL query parameters from an e-mail? The e-mail is in a set format every day and is just plain text so a number of tools could be used to parse out the data. The problem is that I have no idea how to go taking the e-mail from when I receive it in Outlook and getting it to the SQL server to run.

I'd recommend making a C# console application. The C# application would query your mail server, get the e-mail body text, parse it, and then make the appropriate queries in the database.

I have never actually accessed e-mail from C#, but I am pretty sure that there must be some way to do it. You could always ask on the .NET megaquestion thread. Accessing sql server from C# is fairly trivial (where I work we use the wonderful SqlHelper class to wrap low-level details -- see the article here: http://aspnet.4guysfromrolla.com/articles/062503-1.aspx).

Since the application will compile and result in an exe, it can be set to run every day through windows scheduled task manager.

zapateria
Feb 16, 2003
Assume the following table:

code:
+-------+-------+
|parent | child |
+-------+-------+
|1      |      2|
|2      |      3|
|2      |      4|
|3      |      5|
+-------+-------+

Which would be a tree like this:

              1
              |
              2
             / \
            3   4
            |
            5
In Oracle, you can use the following query:

code:
select child from table
start with parent = 2
connect by prior child = parent
to get the result

3
5
4

which are every "child" with 2 as "parent".

Now the question(s):

- What is the simplest way to do this in MySQL and not use code?
- What is the simplest way to do this with code (PHP)?
- What is fastest? Do this in SQL or recursive code?

My googling only resulted in stuff like "MySQL will have this function soon" and "you should use some kind of hierarchic structure in your table with references to left and right nodes".

functional
Feb 12, 2008

zapateria posted:

- What is the simplest way to do this in MySQL and not use code?
- What is the simplest way to do this with code (PHP)?
- What is fastest? Do this in SQL or recursive code?

Your naming convention is bad ('parent' and 'child'). The way you have it the first element should be called 'node' and the second should be called 'child'. Actually, this is usually a bad way to store trees, since you can have multiple children but not multiple parents, so probably what you want to do is use a NODE|PARENT format, with a PARENT of 0 indicating a root level node. This is how it's usually done. It's referred to as the adjacency list method.

You should not attempt to reconstruct a tree using an adjacency list entirely in MySQL. It's slow and the code is silly.

It's much faster to do it in PHP. Here's an overview of how. Select all the nodes in the tree. Create two node lists: one sorted by node and the other by parent. For each node construct a CHILDREN array. Then you can move a pointer/index through each of the two lists. At each *node, increment through the parents and add matching entries. Whenever *parent is greater than *node, increment node and continue. Stop whenever you hit the bottom of either list. This method is linear in the number of nodes in the tree. Then you have a traditional tree with a CHILDREN array, and you can perform traditional tree functions on it.

Alternatively, if you don't need all this tree functionality, and you just need a list of children, you can treat it like a connectivity problem in a directed graph. Select everything out and use the arrays to trace back and color everything connected to '2'.

Begby
Apr 7, 2005

Light saber? Check. Black boots? Check. Codpiece? Check. He's more machine than kid now.

zapateria posted:

My googling only resulted in stuff like "MySQL will have this function soon" and "you should use some kind of hierarchic structure in your table with references to left and right nodes".

The left and right nodes refer to the nested set model which has slow inserts but really fast reads. With that model you will also be able to get your entire tree with a single SQL statement and no recursive loops.

To go that route you will need to get a set of queries for reading the tree or a subset of the tree and also for inserting records. A good book on that, which has all the SQL you need, is by a guy named Joe Celko.

http://www.amazon.com/Hierarchies-Smarties-Kaufmann-Management-Systems/dp/1558609202/ref=pd_bbs_sr_3?ie=UTF8&s=books&qid=1207945269&sr=8-3

That book also covers the adjacency list model, and probably how you have it set up now.

To solve the problem you are currently facing, you can either do a recursive solution, or if the data is of a fixed depth you can do multiple left joins. Like if the deepest child is 4 deep you would do 3 left joins back to the same table (or something like that, it's been a while since I have done it).
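A rough sketch of that fixed-depth approach against the parent/child table from the question (the table name `table` is taken straight from the example, and I'm assuming at most 3 levels below node 2):

code:
-- children, grandchildren and great-grandchildren of node 2
SELECT t1.child AS level1, t2.child AS level2, t3.child AS level3
FROM `table` t1
LEFT JOIN `table` t2 ON t2.parent = t1.child
LEFT JOIN `table` t3 ON t3.parent = t2.child
WHERE t1.parent = 2;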

Stabby McDamage
Dec 11, 2005

Doctor Rope
I'm using DBI with Perl against MySQL. I know Perl inside and out, but have very little database experience.

I want to iterate all rows in a very large table (millions of rows). Hence:

code:
my $sql_pagelinks = $dbh->prepare("SELECT * FROM pagelinks");

print "Starting to execute\n";
$sql_pagelinks->execute() or sqldie();
print "Execute done, starting to iterate results\n";

while (my $row = $sql_pagelinks->fetchrow_hashref) {
	print "I found a row\n";
}
When I run it, it stops after printing "Starting to execute", indicating that it's spending time doing the execute(). During this time, perl balloons in memory usage, as if it was streaming in the ENTIRE result (many gigabytes). I abort before I run out of RAM.

I thought that the "prepare, execute, fetch, fetch, fetch, ..." paradigm meant that the database would get ready to start streaming results at execute(), then walk a cursor through the results with each fetch(). Instead, it seems like execute() wants to collect all the results, and fetch() will just iterate them locally.

Am I doing something wrong?

Do I need to use LIMIT to get a subset, and wrap it in a larger loop to walk the LIMIT through the table? That would seem to defeat the purpose of having a database...

<deleted user>

Stabby McDamage posted:

I'm using DBI with Perl against MySQL. I know Perl inside and out, but have very little database experience.

I want to iterate all rows in a very large table (millions of rows). Hence:

code:
my $sql_pagelinks = $dbh->prepare("SELECT * FROM pagelinks");

print "Starting to execute\n";
$sql_pagelinks->execute() or sqldie();
print "Execute done, starting to iterate results\n";

while (my $row = $sql_pagelinks->fetchrow_hashref) {
	print "I found a row\n";
}
When I run it, it stops after printing "Starting to execute", indicating that it's spending time doing the execute(). During this time, perl balloons in memory usage, as if it was streaming in the ENTIRE result (many gigabytes). I abort before I run out of RAM.

I thought that the "prepare, execute, fetch, fetch, fetch, ..." paradigm meant that the database would get ready to start streaming results at execute(), then walk a cursor through the results with each fetch(). Instead, it seems like execute() wants to collect all the results, and fetch() will just iterate them locally.

Am I doing something wrong?

Do I need to use LIMIT to get a subset, and wrap it in a larger loop to walk the LIMIT through the table? That would seem to defeat the purpose of having a database...

The result set is being buffered by DBD::MySQL, like you said. By default, the driver uses "mysql_store_result()" to buffer the result locally. MySQL also provides "mysql_use_result()", which works more like you described, but at a possibly serious performance hit at the server (see here).

Unless you're doing this on an isolated box where you're the only client, I would probably go with using LIMIT as you said. Just remember that for "LIMIT 5000, 20", the server actually reads rows 1-5000 before sending you the next twenty you actually want, so you might want a fully-indexed subquery involved that pulls the keys you need first and passes those to an outer query that gets the full row data.
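That trick usually looks something like this -- a sketch, assuming pagelinks has an indexed id column:

code:
SELECT p.*
FROM pagelinks p
JOIN (
    -- the inner query only has to walk the index, not the full rows
    SELECT id
    FROM pagelinks
    ORDER BY id
    LIMIT 5000, 20
) k ON k.id = p.id;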

Server-side prepare (which is probably not on by default, depending on your DBD::MySQL version) will prepare the query on the server, so that the client needs only send the parameters for the query on each execute, rather than sending the entire block of SQL + parameters and doing the query parsing step each time. It's mostly useful for bulk write operations. To use it, you need the "mysql_server_prepare=1" option included in the DSN given to connect(), but read the docs first.

Check out the DBD::MySQL docs and source, and the MySQL C API docs; they demystify a lot of things.

MySQL sucks. Enjoy. :)

Stabby McDamage
Dec 11, 2005

Doctor Rope

genericadmin posted:

(The answer)
Thank you thank you thank you. I was rapidly losing hope. I'll try that out when I get back to work.

genericadmin posted:

MySQL sucks. Enjoy. :)

I'm rapidly finding that out. It took weeks to get one table loaded because of a particular index field having massive overhead. Is Postgre any better? This is single-user data analysis -- I'm doing machine learning on a dump of wikipedia.

Xae
Jan 19, 2005

Stabby McDamage posted:

Thank you thank you thank you. I was rapidly losing hope. I'll try that out when I get back to work.


I'm rapidly finding that out. It took weeks to get one table loaded because of a particular index field having massive overhead. Is Postgre any better? This is single-user data analysis -- I'm doing machine learning on a dump of wikipedia.

Postgres tends to be faster when using constraints or with multiple users, and it tends to be more reliable and have more features. MySQL tends to be a bit easier to use and set up.

zapateria
Feb 16, 2003
I have two DBS, for the time being one is Oracle and one is MySQL but I might move everything to Oracle.

OraDB has one table:

hour_vl
| key | value |

MySQLDB has one table:

maaler
| key |


The maaler table has a small (< 1000) subset of keys used in the hour_vl table
hour_vl is huge.

Right now (don't beat me up) I do the following to get the metering values from the metering equipment in maaler:

code:

MySQLDB:
array = select key from maaler [where ... the stuff i want...]
string = implode(array, ",")

OraDB:
select value from hour_vl where key in ( string )
which will produce a query to the OraDB which has a huuge "in" statement, and my fear is that I eventually will get a string overflow or whatever in the query statement. It has worked fine so far though.

I don't really want to replicate any data between the two DB's, and I have limited experience with database-links.

Is there any better way to do this?

zapateria fucked around with this message at 13:26 on Apr 13, 2008

Begby
Apr 7, 2005

Light saber? Check. Black boots? Check. Codpiece? Check. He's more machine than kid now.

zapateria posted:

Is there any better way to do this?

Yeah, you use one database.

zapateria
Feb 16, 2003

Begby posted:

Yeah, you use one database.

That's unfortunately not an option, since I only have read-access to the OraDB.

Lancer383
Aug 15, 2007

Say what you want about the tenants of national socialism, Dude, at least it was an ethos.

zapateria posted:

I have two DBS, for the time being one is Oracle and one is MySQL but I might move everything to Oracle.

OraDB has one table:

hour_vl
| key | value |

MySQLDB has one table:

maaler
| key |


The maaler table has a small (< 1000) subset of keys used in the hour_vl table
hour_vl is huge.

Right now (don't beat me up) I do the following to get the metering values from the metering equipment in maaler:

code:

MySQLDB:
array = select key from maaler [where ... the stuff i want...]
string = implode(array, ",")

OraDB:
select value from hour_vl where key in ( string )
which will produce a query to the OraDB which has a huuge "in" statement, and my fear is that I eventually will get a string overflow or whatever in the query statement. It has worked fine so far though.

I don't really want to replicate any data between the two DB's, and I have limited experience with database-links.

Is there any better way to do this?

You could write a quick little program in Python to do it -- you would need a module to connect to MySQL, and Cx_Oracle to connect to Oracle. Create a list to hold the maaler values, and then set up a loop to grab 200(?) values from Oracle at a time, and output to a CSV file.

But I think I'm giving you a Glass Bottle or Dirty Shoe answer: http://weblogs.asp.net/alex_papadimoulis/archive/2005/05/25/408925.aspx

da keebsta knicca
Sep 12, 2000

Oh Ruutu, you are such a card.
Full text search question.

I am doing this job search still, but I am having trouble with mysql returning results based on searches with terms like "C++" and "C#". I think it only searches against "C". Is there a way to have it to make sure it searches against the "++" characters also?

The problem is that I will have to make the index minimum 1 character instead of 2 characters. I didn't even want it this small, but needed to search "PHP". I guess my project is a little different from what full text would normally be used for, but I need a quick search against an already pretty complex badly designed database.

kalleboo
Jan 13, 2001

Hjälp

da keebsta knicca posted:

I am doing this job search still, but I am having trouble with mysql returning results based on searches with terms like "C++" and "C#". I think it only searches against "C". Is there a way to have it to make sure it searches against the "++" characters also?
Try adding + as a word character: http://dev.mysql.com/doc/refman/5.0/en/fulltext-fine-tuning.html

da keebsta knicca
Sep 12, 2000

Oh Ruutu, you are such a card.

kalleboo posted:

Try adding + as a word character: http://dev.mysql.com/doc/refman/5.0/en/fulltext-fine-tuning.html

Ack... thanks. Requires me to have meetings with 2 different people to discuss possible implications site wide and all this junk. :(

Changing a character set XML file shouldn't affect the display of any of these special characters, should it? And only affect searches? Right now this will be the first thing we actually use full text search with, so it shouldn't be a problem, right? Yet we display a lot of special characters and stuff from our databases, because of the nature of us having a lot of data from international people.

da keebsta knicca fucked around with this message at 05:45 on Apr 14, 2008

<deleted user>

da keebsta knicca posted:

Ack... thanks. Requires me to have meetings with 2 different people to discuss possible implications site wide and all this junk. :(

Changing a character set XML file shouldn't affect the display of any of these special characters, should it? And only affect searches? Right now this will be the first thing we actually use full text search with, so it shouldn't be a problem, right? Yet we display a lot of special characters and stuff from our databases, because of the nature of us having a lot of data from international people.

It won't affect the data at all, just what characters are used to build the fulltext index. Remember that '+' is a boolean operator also, and that increasing the indexed character set will increase the index size and lower performance.

Triple Tech
Jul 28, 2006

So what, are you quitting to join Homo Explosion?
Can I do this? I know I can just make it two update statements with a where clause, but I wonder if a single update is any better.

This is SQL Server 2005. And Victor already showed me how to exclude a record from a sum using case else 0, but how do you exclude something from a count? The documentation says count all will exclude nulls, but Victor says he's never seen all used that way. How else would you use it?

code:
type,  old_balance, new_balance
a   0  12
a  12   0
b   3   3
b   5  18
c   2   0
c  13  14

select
  count(all case when old_balance = 0 then null else old_balance end) as old_items,
  count(all case when new_balance = 0 then null else new_balance end) as new_items
from #tableu
group by type

Aredna
Mar 17, 2007
Nap Ghost
I think this will work:

code:
sum(case when old_balance = 0 then 0 else 1 end) as old_items
edit: 2nd example removed due to being completely wrong
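For the count part specifically, COUNT() already skips NULLs (and ALL is the default, so you can drop it), so the CASE-that-returns-NULL version you posted should work too -- a rough sketch of the whole thing, assuming the same #tableu:

code:
select
  type,
  count(case when old_balance = 0 then null else old_balance end) as old_items,
  count(case when new_balance = 0 then null else new_balance end) as new_items
from #tableu
group by type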

vvv My apologies... on occasion I try to use the equivalent of an expression from another language in SQL ><

Aredna fucked around with this message at 21:27 on Apr 14, 2008

Triple Tech
Jul 28, 2006

So what, are you quitting to join Homo Explosion?

Aredna posted:

code:
sum(max(old_balance,1)) as old_items

What does the comma 1 part do? I understand what max does, but not with multiple arguments in the expression.

StonedogJones
Mar 13, 2006
I have a text column in a table that has random commas in it.

How can I strip the commas out of each row in the column? I want the commas removed, but they could be replaced with a different character if that makes it easier. This is in an MS SQL 2000 database/server.

Xae
Jan 19, 2005

StonedogJones posted:

I have a text column in a table that has random commas in it.

How can I strip the commas out of each row in the column? I want the commas removed, but they could be replaced with a different character if that makes it easier. This is in an MS SQL 2000 database/server.

I think Replace() should do the trick

http://doc.ddart.net/mssql/sql70/ra-rz_6.htm
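Something along these lines should do it -- a sketch, assuming the table is my_table, the column is text_column, and the column is varchar (REPLACE in SQL 2000 won't operate directly on the TEXT datatype):

code:
-- strip every comma from every row in one shot
UPDATE my_table
SET text_column = REPLACE(text_column, ',', '')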

Muzjik
Apr 11, 2008
OK, I read about 4 pages of this thread (first 2 and last 2) and nobody has really answered this kind of question, so here goes.

I have a CD of SQL Server 2005 and it has several versions on it (Enterprise, Standard).

I'm sort of new at this and I've been trying to find help on installing from the CD. When I put it in and open it, it opens in Internet Explorer and shows a list of links. So I hit the one that looked best (Standard 32-bit) and found the setup, and this is where I get confused.

When setting it up, it says it's installing Office 2003 web components or something along those lines, and then that's all. I don't know what to do from here.

Sorry for this being so long for such an easy question; basically I want to install it and I am too stupid to figure out how! Please help!

StonedogJones
Mar 13, 2006

Xae posted:

I think Replace() should do the trick

http://doc.ddart.net/mssql/sql70/ra-rz_6.htm

No, I don't think replace will do it. The column has different text in each row, so I can't search for a string and then replace something in that string, because the string is not constant for each row.

mjan
Jan 30, 2007

StonedogJones posted:

No, I don't think replace will do it. The column has different text in each row, so I can't search for a string and then replace something in that string, because the string is not constant for each row.

Xae is right, replace should do fine. Assuming the table has a unique key you could do something like the following:

code:
select 'update my_table set text_column = ''' +
       replace(replace(text_column, ',', ''), '''', '''''') +
       ''' where key = ' + cast(key as varchar(100)) as cmd
from my_table



The outer replace is to escape any single quotes in the text, you can skip it if you know it's clean. Spool the output of that to a file and execute it and you should be all set.
