|
Kumbamontu posted:You're a gentleman and a scholar. Thank you! I've dealt with that casting problem a bunch of times, too.
|
# ? Jul 21, 2015 21:24 |
|
|
# ? Jun 8, 2024 13:16 |
|
SQL (Server? or is it standard?) has this really dumb thing where if you ask it to do 'foo' + 10, it will attempt (and fail) to turn 'foo' into a number, instead of turning 10 into a string like pretty much any other language that allows type coercion would do. Kumbamontu's case is extra dumb because he also explicitly told SQL Server that he wants a varchar(max) as the result of the addition. Even if there's a legitimate reason for favouring varchar->int over the opposite - I'd be really curious to learn what it is - the interpreter ought to be able to figure out on its own that it should go the other way in this case. e: apparently one reason SQL2012 introduced CONCAT(x1, x2, ..., xn) is that it does implicitly convert everything to string. Amusingly, I'd always kind of assumed that the ugly CONCAT() was the legacy function and the '+' operator a subsequently-introduced shorthand for the former. NihilCredo fucked around with this message at 21:56 on Jul 21, 2015 |
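For what it's worth, the reason appears to be SQL Server's data type precedence rules: int outranks varchar, so the varchar side gets converted. A quick sketch of the behaviour (ad-hoc queries, no tables involved; the CONCAT form needs SQL Server 2012 or later):

```sql
-- Fails: '+' resolves by data type precedence, int > varchar,
-- so SQL Server tries to cast 'foo' to int and blows up
SELECT 'foo' + 10;

-- Works: do the cast yourself...
SELECT 'foo' + CAST(10 AS varchar(10));

-- ...or use CONCAT (2012+), which converts every argument to string
SELECT CONCAT('foo', 10);
```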
# ? Jul 21, 2015 21:49 |
|
NihilCredo posted:SQL (Server? or is it standard?) has this really dumb thing where if you ask it to do 'foo' + 10, it will attempt (and fail) to turn 'foo' into a number, instead of turning 10 into a string like pretty much any other language that allows type coercion would do. SQL shorthand concat is ||
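For reference, a sketch of the standard operator (engines vary in how much implicit coercion they allow, so the explicit CAST is the portable form):

```sql
-- ANSI SQL string concatenation
SELECT 'foo' || CAST(10 AS VARCHAR(10));
```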
|
# ? Jul 21, 2015 23:00 |
|
ljw1004 posted:SELECT * FROM Posts WHERE Title LIKE '%resources%' The postgresql and sqlite docs are excellent. Like waaaaay better than anything else out there.
|
# ? Jul 21, 2015 23:27 |
|
Is there a standard solution for supporting a dynamic query of form a OR b AND (c OR d) from some proprietary interface to SQL? It seems that some limits are required to stop forcing full table scans. Table along the lines of:code:
edit: It looks like using the MySQL family I can use the MATCH() ... AGAINST() function suite if I prepare all the codes into a FULLTEXT index per row (article_id above). Then that provides the full boolean mode used in text searches. edit 2: Is this only possible with a "search engine"? I'm looking at Sphinx, and the problem appears to come down to performing boolean matching on what they term multi-valued attributes (MVAs). Sphinx appears cute as it can integrate with MariaDB to be driven by SQL. http://stackoverflow.com/questions/13643114/how-do-i-search-for-mvas-with-sphinxse https://mariadb.com/kb/en/mariadb/sphinx-storage-engine/ MrMoo fucked around with this message at 13:58 on Jul 22, 2015 |
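A minimal sketch of the MATCH() ... AGAINST() idea mentioned above (MySQL/MariaDB; table and column names are made up, only article_id comes from the post):

```sql
-- Assumed shape: one row per article, all its codes flattened into one column
CREATE TABLE article_codes (
    article_id INT PRIMARY KEY,
    codes      TEXT,
    FULLTEXT KEY ft_codes (codes)
);

-- Boolean mode gives AND/OR-style predicates over the index:
-- '+' marks a required term, bare terms are optional, parentheses group
SELECT article_id
FROM article_codes
WHERE MATCH(codes) AGAINST('+b +(c d) a' IN BOOLEAN MODE);
```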
# ? Jul 22, 2015 02:02 |
|
This is probably one of those questions that apply to the title of the thread, but here goes: I have a SQL dbo that, when I click "Execute" on it, gives me an updated list of information. I want to be able to do this automatically every few seconds so that I can keep up with how far along the DB is over a screenshare. How would I go about doing the automatic updating part? Any help is greatly appreciated.
|
# ? Jul 23, 2015 17:41 |
|
Probably not what you want to hear, but when you're reloading 5/sec, you might as well just realize that a session takes fewer resources, and there are mechanisms for this in real databases: http://www.postgresql.org/docs/9.4/static/sql-notify.html
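A rough sketch of the LISTEN/NOTIFY mechanism that link describes (channel name and payloads invented for illustration):

```sql
-- Session A (the watcher) subscribes to a channel:
LISTEN progress_updates;

-- Session B (the loader) pings the channel whenever work advances:
NOTIFY progress_updates, 'batch 42 committed';

-- pg_notify() is the function form, convenient inside triggers
-- on the table being loaded:
SELECT pg_notify('progress_updates', 'batch 43 committed');
```

The watcher then reacts to notifications as they arrive instead of polling on a timer.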
|
# ? Jul 23, 2015 23:36 |
|
Interesting. I've opted for a refresh loop set on a 5 minute delay. This is just a temporary solution until I get a second server. Thanks for the info though.
|
# ? Jul 24, 2015 05:18 |
|
This isn't an SQL question, but just a fail you guys might want to know about... I've been working on a bounceback handler, and we got a non standard delivery report (wasn't multipart, just plaintext) which contained this in the first few lines:code:
You can literally perform an SQL injection on these guys just by sending them an email. The Internet never ceases to amaze.
|
# ? Jul 24, 2015 15:17 |
|
Let's talk about Partitioning on SQL Server 2014: Let's start by saying I've never worked with partitioning before, though I understand the concepts of horizontal and vertical partitioning (I think).

Right now I'm looking at designing an OLAP database that will collect on the order of 20 million new records a day with a primary key of [timestamp, areaID]. Typically I work with large structured data sets by creating a filesystem for data files and another filesystem for indexes. I then create several files in each filesystem and create a dedicated volume on storage to serve each file, spreading the I/O load across a battery of volumes to maximize I/O.

However, in this deployment, I only have a single local logical RAID array to host the data. Past experience indicates that I can still receive a performance benefit by creating multiple volumes and putting a single data (or index) file in each volume. This way I can avoid fragmentation on disk as data files grow and can use multiple streams for reading and writing data.

But now I'm reading up on partitioning, and instead of creating a single file system for the data and a single file system for indexes, I can break this up into partitions on date, partitioned by year/month, or by AreaID. Can anyone speak to the pros/cons of each scenario? When partitioning by date, what happens when a new month arrives? Does the database have to be modified to add a new month partition and thus a new month partition data file? Or does it happen automatically? Similarly, if a new AreaID is added, does the database have to be repartitioned or does it somehow know to create a partition for the new area?
|
# ? Jul 25, 2015 00:50 |
|
I am using Postgresql (I have no choice in the matter) and would like to know if there is a query to count the columns in a row where a value is true. The table would look something like

---------------------------
| user_id | A | B | C | D |
---------------------------
| 1234567 | T | T | F | F |
---------------------------

What I would like to do is count how many TRUE columns there are on the row identified by the user_id in the query.
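One common trick (a sketch; the table name is made up, the columns come from the example above, and I'm assuming A-D are boolean): cast each boolean to an integer and add them up.

```sql
SELECT user_id,
       A::int + B::int + C::int + D::int AS true_count
FROM my_table
WHERE user_id = 1234567;

-- If the columns are nullable, wrap each one in COALESCE(x, false)
-- first, so a single NULL doesn't null out the whole sum.
```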
|
# ? Jul 27, 2015 15:58 |
|
code:
|
# ? Jul 27, 2015 16:37 |
|
Sedro posted:
Perfect! This worked very well for what I wanted. Thanks a lot
|
# ? Jul 27, 2015 17:02 |
|
Could somebody give me some help as to why this is not updating? code:
For clarity, the logic is as follows: the user enters a Patid and Location. Patid is unique, and I just want to update the Location for the Patid entered.
|
# ? Jul 28, 2015 00:13 |
|
joebuddah posted:Could somebody give me some help as to why this is not updating I'm sure it probably exists further down outside of the quoted code but do you definitely have your closing '?>' for the php
|
# ? Jul 28, 2015 00:35 |
|
joebuddah posted:I am not getting any error messages but it is not updating. Nothing's immediately standing out, there. You'll want to switch to E_ALL, expressly turn on error display using ini_set('display_errors', true); and make sure you expressly tell PDO how to report errors. There's also the PHP thread to check out. EmmyOk posted:I'm sure it probably exists further down outside of the quoted code but do you definitely have your closing '?>' for the php Not only does PHP not require the closing tag at the bottom of the file, doing so can allow stray whitespace to sneak in, and that can lead to very, very irritating problems. It's best to keep the final tag unclosed.
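A sketch of the error-reporting setup being suggested above (the DSN and credentials are placeholders, and $odb matches the variable name used later in the thread):

```php
<?php
// Report everything and actually display it while debugging
error_reporting(E_ALL);
ini_set('display_errors', true);

// Tell PDO to throw exceptions instead of failing silently
$odb = new PDO('mysql:host=localhost;dbname=demo', 'user', 'pass');
$odb->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
```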
|
# ? Jul 28, 2015 00:56 |
|
McGlockenshire posted:Nothing's immediately standing out, there. Cool I never knew that!
|
# ? Jul 28, 2015 01:02 |
|
Thanks for the help so far. I fixed the error reporting and am now getting these errors:

Notice: Undefined variable: dbo in /home/wrchtnoy/public_html/demos/update4.php on line 16
Fatal error: Call to a member function prepare() on a non-object in /home/wrchtnoy/public_html/demos/update4.php on line 16

this is line 16: $sql->bindParam(':Patid',$Patid);

edit: found the problem - $sql=$dbo->prepare was supposed to be $sql=$odb->prepare joebuddah fucked around with this message at 01:30 on Jul 28, 2015 |
# ? Jul 28, 2015 01:20 |
|
This has been annoying me forever. Why is it sometimes I write a query in SSMS, save the .sql. Then I come back say the following day and open up the query. All the tables and fields will have red squiggly and I'll get a ton of invalid object name errors. Then I put "USE [database name]" as the first line of the query. Now my query works again. The crazy thing is after it runs once I can remove the use statement and everything still works. What gives?
|
# ? Jul 29, 2015 15:42 |
|
When you're connecting you're not specifying the catalog in the connection string, so it's defaulting (probably) to the master catalog.
|
# ? Jul 29, 2015 15:53 |
|
Whenever you connect to the server, your connection defaults to the "master" database and you have to either include a use statement to let your query know which database you want to talk to, or you need to just change it in the menu at the top: To avoid doing either of these steps, you should get used to using 3 (or even 4 if you deal with multiple servers) part naming in your queries instead. So, if you have a server named Server1, and a database within that called DBMaster, and a table called Customers in the dbo schema within that database, after connecting to Server1 you could write: code:
code:
code:
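For reference, 3- and 4-part naming with the example names from the post looks something like this (my own sketch, not the original snippets; the 4-part form assumes the servers are linked):

```sql
-- 3-part name: database.schema.table, works from any database on the server
SELECT * FROM DBMaster.dbo.Customers;

-- 4-part name: server.database.schema.table, reaches across a linked server
SELECT * FROM Server1.DBMaster.dbo.Customers;
```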
|
# ? Jul 29, 2015 15:57 |
|
Ah yes that makes perfect sense now. Thanks guys.
|
# ? Jul 29, 2015 16:31 |
|
Kumbamontu posted:You can use 4-part naming including the server name to join databases across servers (assuming they can see each other / are linked), like so: Just be aware that like 99% of the time this is the wrong way to do such a thing. Other than occasional one-off things.
|
# ? Jul 29, 2015 16:46 |
|
No Safe Word posted:Just be aware that like 99% of the time this is the wrong way to do such a thing. Other than occasional one-off things. This is the first I've heard this - how else can you accomplish the same thing? We have several reporting servers that do various things and we have nightly processes where batches of monitoring queries are run and we've never run into a problem with the servers talking to each other. e: This post may sound snarky but it's not meant to, if we could do this in a better way I'm totally open to it! kumba fucked around with this message at 17:20 on Jul 29, 2015 |
# ? Jul 29, 2015 17:16 |
|
Kumbamontu posted:This is the first I've heard this - how else can you accomplish the same thing? We have several reporting servers that do various things and we have nightly processes where batches of monitoring queries are run and we've never run into a problem with the servers talking to each other. You do an export and import via things like SSIS. For batch stuff in off hours using linked servers isn't the worst abuse. Linked servers just introduce complexity that is generally unnecessary: permissions, network latency, schema inconsistencies, etc. Those are the sorts of things you have to manage on both ends whereas if you just have an export/import process you just have to manage that.
|
# ? Jul 29, 2015 17:33 |
|
Any help with my partitioning question? Or is this the wrong thread for this. Perhaps there is a DBA thread somewhere? Agrikk posted:Let's talk about Partitioning on SQL Server 2014:
|
# ? Jul 29, 2015 22:09 |
|
Kumbamontu posted:Whenever you connect to the server, your connection defaults to the "master" database and you have to either include a use statement to let your query know which database you want to talk to, or you need to just change it in the menu at the top: Actually, I think the initial database defaults to... the database selected as the default one in the server config. Not that the explicit DB name suggestion is incorrect in any way, but if 90% of the time everyone uses the same DB, it can be made the default one for ease of use.
|
# ? Jul 30, 2015 20:35 |
|
Agrikk posted:When partitioning by date, what happens when a new month arrives? Does the database have to be modified to add a new month partition and thus a new month partition data file? Or does it happen automatically? Here are some additional resources: https://msdn.microsoft.com/en-us/library/dd578580.aspx http://www.brentozar.com/sql/table-partitioning-resources/
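To answer the "does it happen automatically" part directly: no. On SQL Server, new partitions have to be created by splitting the partition function, typically from a scheduled "sliding window" job. A rough sketch of what those resources describe, with all object names invented:

```sql
-- One-time setup: monthly boundaries, each range mapped to its own filegroup
CREATE PARTITION FUNCTION pfMonthly (datetime2)
AS RANGE RIGHT FOR VALUES ('2015-07-01', '2015-08-01');

CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly TO (fgJun, fgJul, fgAug);

-- Monthly maintenance job: tell the scheme which filegroup to use next,
-- then split off a boundary for the new month
ALTER PARTITION SCHEME psMonthly NEXT USED fgSep;
ALTER PARTITION FUNCTION pfMonthly() SPLIT RANGE ('2015-09-01');
```

The same logic applies to a new AreaID under list-style partitioning: nothing is created for it until you split the function yourself.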
|
# ? Aug 3, 2015 08:39 |
|
Using Postgres 9.3, I have a table called log with just under 10m rows defined like so:

pre:
    Column    |           Type           |                    Modifiers
--------------+--------------------------+--------------------------------------------------
 id           | integer                  | not null default nextval('log_id_seq'::regclass)
 log_level    | text                     | not null
 log_category | text                     |
 message      | text                     | not null
 username     | text                     |
 time_stamp   | timestamp with time zone | not null
 company_name | text                     |
 ip_address   | text                     |
Indexes:
    "log_pkey" PRIMARY KEY, btree (id)
    "log_company_name" btree (company_name)
    "log_time_stamp_idx" btree (time_stamp)

create index log_search_message_idx on log using gist(message gist_trgm_ops);

This works great sometimes, but the results are wildly inconsistent. Here are two queries that return exactly the same set of 1664 rows:

pre:
explain (verbose,buffers,analyze) select *, count(*) over() as full_count
from log where company_name = 'x' and (message ilike '%success%') order by time_stamp desc;

 Sort  (cost=5073.78..5076.68 rows=1163 width=232) (actual time=84818.881..84819.108 rows=1664 loops=1)
   Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address, (count(*) OVER (?))
   Sort Key: log.time_stamp
   Sort Method: quicksort  Memory: 491kB
   Buffers: shared hit=1 read=1072075 written=9
   ->  WindowAgg  (cost=529.57..5014.56 rows=1163 width=232) (actual time=84817.815..84818.243 rows=1664 loops=1)
         Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address, count(*) OVER (?)
         Buffers: shared hit=1 read=1072075 written=9
         ->  Bitmap Heap Scan on log  (cost=529.57..5000.02 rows=1163 width=232) (actual time=84744.119..84816.478 rows=1664 loops=1)
               Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address
               Recheck Cond: (log.message ~~* '%success%'::text)
               Filter: (log.company_name = 'x'::text)
               Buffers: shared hit=1 read=1072075 written=9
               ->  Bitmap Index Scan on log_search_message_idx  (cost=0.00..529.28 rows=1165 width=0) (actual time=84743.042..84743.042 rows=1664 loops=1)
                     Index Cond: (log.message ~~* '%success%'::text)
                     Buffers: shared hit=1 read=1071286 written=9
 Total runtime: 84819.782 ms

pre:
explain (verbose,buffers,analyze) select *, count(*) over() as full_count
from log where company_name = 'x' and (message ilike '%cessful%') order by time_stamp desc;

 Sort  (cost=5073.78..5076.68 rows=1163 width=232) (actual time=87.739..87.957 rows=1664 loops=1)
   Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address, (count(*) OVER (?))
   Sort Key: log.time_stamp
   Sort Method: quicksort  Memory: 491kB
   Buffers: shared hit=5996 read=553
   ->  WindowAgg  (cost=529.57..5014.56 rows=1163 width=232) (actual time=86.738..87.182 rows=1664 loops=1)
         Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address, count(*) OVER (?)
         Buffers: shared hit=5996 read=553
         ->  Bitmap Heap Scan on e911manager.log  (cost=529.57..5000.02 rows=1163 width=232) (actual time=84.324..86.114 rows=1664 loops=1)
               Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address
               Recheck Cond: (log.message ~~* '%cessful%'::text)
               Filter: (log.company_name = 'x'::text)
               Buffers: shared hit=5996 read=553
               ->  Bitmap Index Scan on log_search_message_idx  (cost=0.00..529.28 rows=1165 width=0) (actual time=84.227..84.227 rows=1664 loops=1)
                     Index Cond: (log.message ~~* '%cessful%'::text)
                     Buffers: shared hit=5207 read=553
 Total runtime: 88.189 ms

I don't understand how the index could work so well sometimes and so poorly others. Is there something I'm doing wrong here?
|
# ? Aug 4, 2015 17:42 |
|
Kilson posted:Caching isn't the reason, It is the reason. Look at how much higher Buffers: shared hit is on the fast query.
|
# ? Aug 4, 2015 18:29 |
|
Pardot posted:It is the reason. Look at how much higher Buffers: shared hit is on the fast query. What I meant was that if I do the same slow query twice in a row, it's still slow. If caching made the difference, it should work better when performing the same query as opposed to a different one. Caching isn't making the second query 1000x faster. I can completely rebuild the table and/or indices and get the same results as above for the two queries, regardless of the order I do them.
|
# ? Aug 4, 2015 18:42 |
|
Kilson posted:What I meant was that if I do the same slow query twice in a row, it's still slow. If caching made the difference, it should work better when performing the same query as opposed to a different one. Caching isn't making the second query 1000x faster. I can completely rebuild the table and/or indices and get the same results as above for the two queries, regardless of the order I do them. Oh, I understand what you were saying now. If you know it's all just English, you might want to try making the column use the "C" locale, which speeds up comparisons, and also check out the trigram extension, with which you can create gin and/or gist indexes that work with ilike: http://www.postgresql.org/docs/9.1/static/pgtrgm.html quote:Beginning in PostgreSQL 9.1, these index types also support index searches for LIKE and ILIKE, for example
|
# ? Aug 4, 2015 19:10 |
|
Pardot posted:Oh I understand what you were saying now. If you know it's all just english, you might want to try making the column have the "C" locale, which speeds up comparisons, and also check out the trigram extension which you can create gin and/or gist indexes that work with ilike http://www.postgresql.org/docs/9.1/static/pgtrgm.html I'm using gist trigram indices. That's why I don't understand how one query is so much slower than another.
|
# ? Aug 4, 2015 19:49 |
|
This may be the wrong thread, in which case I apologize. I have a SQL Server in Perth (SQL2012) on one domain, and I have a server (SQL2008R2) in Melbourne on a completely different domain, different company. I need to read some information from a database hosted on the Melbourne server. Currently, I have Windows authentication detail for both servers, and SQL accounts for both sides. I can, from my Perth server, access a remote ip in Windows explorer and see the Melbourne machine. My question is, how do I get the sql servers to talk, when everything else can?
|
# ? Aug 18, 2015 07:26 |
|
Set em up as linked servers. Pretty sure you can plug in whatever domain credentials you want
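A sketch of the setup on the Perth box (the linked server name, host, and remote SQL login are placeholders):

```sql
-- Register the Melbourne instance as a linked server
EXEC sp_addlinkedserver
    @server     = N'MELB',
    @srvproduct = N'',
    @provider   = N'SQLNCLI',
    @datasrc    = N'melbourne.example.com';

-- Map local logins to a SQL account that exists on the remote side
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'MELB',
    @useself     = N'FALSE',
    @locallogin  = NULL,
    @rmtuser     = N'remote_sql_user',
    @rmtpassword = N'********';

-- Then query it with 4-part naming
SELECT TOP 10 * FROM MELB.SomeDatabase.dbo.SomeTable;
```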
|
# ? Aug 18, 2015 23:36 |
|
Not a small question, but this is the only thread on SQL I can find. My employer currently uses Google Sheets to keep track of product prices, inventory and ordering. The sheets have between 200-700 items and are accessed from two other locations. I have spent the last 3-4 months self-learning and writing scripts to automate some tasks, but they break whenever someone accidentally modifies the columns. The boss also wants to be able to automatically update (from Excel or PDF) and compare prices. At this point I think I'm hitting the limit of what Google Sheets is capable of and need to move to SQL.

My knowledge of databases is limited to having taken a course on relational databases back in college. I'm currently messing around with XAMPP, but I think there's no way to avoid having to program a custom interface, as most of the users are older people with limited technical knowledge. My programming skill is limited, so I need to commit to a language to use to make the interface. I'm leaning towards PHP so I don't have to teach the other locations how to install software or run a program. Will I be able to create what my boss wants with MySQL and PHP? If so, what's a good way to start learning the two languages from scratch?
|
# ? Aug 19, 2015 18:21 |
|
If they know Excel, hook up the data source to a SQL backend and have the interface go through that. I wouldn't build a PHP front end unless you want to learn how to (and maintain it as long as you work there).
|
# ? Aug 19, 2015 18:29 |
|
Oh look, PostgreSQL has a foreign data wrapper for that too... https://github.com/lincolnturner/gspreadsheet_fdw
|
# ? Aug 19, 2015 22:58 |
|
kloa posted:If they know Excel, hookup the data source to a SQL backend, and have the interface through that. I'll explore that option since I know all of our machines have Excel. Would this method allow multiple users to access and change values on the same table like google sheet?
|
# ? Aug 20, 2015 18:05 |
|
|
|
Postgres 9.4. I've got three tables like so:

product: id | description
tag: id | product | text
attributes: id | product | key | value

product in both tag and attributes references id in product. All other fields are text. I've got a view that joins the tables and results in the following table. Note that for each product I get rows equal to the number of tags times the number of key/value pairs.

product.id | product.description | tag.text | ARRAY[attributes.key, attributes.value]

Can I aggregate both the tags and k/v pairs (into arrays) with a single SELECT/GROUP BY statement or do I need to first do one and then the other, building an intermediate table along the way?
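It can be done in one pass (a sketch using the table and column names above; the DISTINCT is what collapses the tag-times-attribute cross-product the view produces):

```sql
SELECT p.id,
       p.description,
       array_agg(DISTINCT t.text)                AS tags,
       array_agg(DISTINCT ARRAY[a.key, a.value]) AS attrs
FROM product p
LEFT JOIN tag        t ON t.product = p.id
LEFT JOIN attributes a ON a.product = p.id
GROUP BY p.id, p.description;

-- attrs comes back as a two-dimensional text array, e.g. {{k1,v1},{k2,v2}}.
-- A product with no tags or attributes yields {NULL} from the LEFT JOINs.
```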
|
# ? Aug 21, 2015 19:46 |