Nth Doctor
Sep 7, 2010

Darkrai used Dream Eater!
It's super effective!


Kumbamontu posted:

You're a gentleman and a scholar. Thank you!

:ocelot: I've dealt with that casting problem a bunch of times, too.

NihilCredo
Jun 6, 2011

suppress anger in every possible way:
a single outburst will defame you more than many virtues will commend you

SQL (Server? or is it standard?) has this really dumb thing where if you ask it to do 'foo' + 10, it will attempt (and fail) to turn 'foo' into a number, instead of turning 10 into a string like pretty much any other language that allows type coercion would do.

Kumbamontu's case is extra dumb because he also explicitly told SQL Server that he wants a varchar(max) as the result of the addition. Even if there's a legitimate reason for favouring varchar->int over the opposite - I'd be really curious to learn what it is - the interpreter ought to be able to figure out on its own that it should go the other way in this case.


e: apparently one reason SQL2012 introduced CONCAT(x1, x2, ..., xn) is that it does implicitly convert everything to string. Amusingly, I'd always kind of assumed that the ugly CONCAT() was the legacy function and the '+' operator a subsequently-introduced shorthand for the former...
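
For illustration, a minimal sketch of the two behaviours against SQL Server (2012+ for CONCAT):
code:
-- '+' tries to coerce the string to a number and blows up:
SELECT 'foo' + 10;
-- Conversion failed when converting the varchar value 'foo' to data type int.

-- CONCAT() coerces every argument to string instead:
SELECT CONCAT('foo', 10);   -- 'foo10'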

NihilCredo fucked around with this message at 21:56 on Jul 21, 2015

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

NihilCredo posted:

SQL (Server? or is it standard?) has this really dumb thing where if you ask it to do 'foo' + 10, it will attempt (and fail) to turn 'foo' into a number, instead of turning 10 into a string like pretty much any other language that allows type coercion would do.

Kumbamontu's case is extra dumb because he also explicitly told SQL Server that he wants a varchar(max) as the result of the addition. Even if there's a legitimate reason for favouring varchar->int over the opposite - I'd be really curious to learn what it is - the interpreter ought to be able to figure out on its own that it should go the other way in this case.


e: apparently one reason SQL2012 introduced CONCAT(x1, x2, ..., xn) is that it does implicitly convert everything to string. Amusingly, I'd always kind of assumed that the ugly CONCAT() was the legacy function and the '+' operator a subsequently-introduced shorthand for the former...

The standard SQL shorthand for concat is ||.
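
In PostgreSQL, for instance (a quick sketch), || coerces the non-string side for you:
code:
SELECT 'foo' || 10;   -- 'foo10'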

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

ljw1004 posted:

SELECT * FROM Posts WHERE Title LIKE '%resources%'

I've got as far as I can go hacking my way through SQL, writing queries, googling, and searching Stack Overflow when I get stuck.

Now I want to sit down with an authoritative book and actually learn the language all-up in a more systematic and principled way -- so I no longer get surprised each time it tells me "can't have an aggregate inside a subquery" or whatever.


I couldn't find a "resources" post in this thread. Is there one? What's a good SQL book? I'm happy with language theory and computer science, and would prefer a thorough dry text over one that's practical and applied but partial.

The postgresql and sqlite docs are excellent.

Like waaaaay better than anything else out there.

MrMoo
Sep 14, 2000

Is there a standard solution for supporting a dynamic query of the form a OR b AND (c OR d), passed from some proprietary interface down to SQL? It seems some limits are required to avoid forcing full table scans. The table is along the lines of:

code:
article_id |  code
-----------+-------
  100      |  one
  100      |  two
  200      |  one
  300      |  three
Find articles with dynamic queries, e.g. code = one AND (code = two OR code = three)
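
One way to express that in plain SQL, sketched here assuming the table above is named article_codes: conditional aggregation turns each predicate into a "does any row of this article match" test, so the boolean structure survives the GROUP BY.
code:
SELECT article_id
FROM   article_codes
GROUP  BY article_id
HAVING SUM(CASE WHEN code = 'one' THEN 1 ELSE 0 END) > 0
   AND ( SUM(CASE WHEN code = 'two' THEN 1 ELSE 0 END) > 0
      OR SUM(CASE WHEN code = 'three' THEN 1 ELSE 0 END) > 0 );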

edit: It looks like, in the MySQL family, I can use the MATCH() ... AGAINST() function suite if I prepare all the codes into a FULLTEXT index per row (article_id above). That provides the full boolean mode used in text searches.

edit 2: Is this only possible with a "search engine"? I'm looking at Sphinx, and the problem appears to come down to performing boolean matching on what they term multi-valued attributes (MVAs). Sphinx appears cute, as it can integrate with MariaDB and be driven by SQL.

http://stackoverflow.com/questions/13643114/how-do-i-search-for-mvas-with-sphinxse
https://mariadb.com/kb/en/mariadb/sphinx-storage-engine/

MrMoo fucked around with this message at 13:58 on Jul 22, 2015

LiterallyAnything
Jul 11, 2008

by vyelkin
This is probably one of those questions that apply to the title of the thread, but here goes:

I have a SQL dbo that, when I click "Execute" on it, gives me an updated list of information. I want to run this automatically every few seconds so I can keep up with how far along the DB is over a screenshare. How would I go about doing the automatic updating part? Any help is greatly appreciated.

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
Probably not what you want to hear, but when you're reloading 5/sec you might as well realize that a session takes fewer resources, and there are mechanisms for this in real databases: http://www.postgresql.org/docs/9.4/static/sql-notify.html
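
A minimal sketch of that mechanism, assuming a channel named progress:
code:
-- the watching session subscribes once:
LISTEN progress;

-- the loading job fires an event whenever it makes progress:
NOTIFY progress, 'batch 42 loaded';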

LiterallyAnything
Jul 11, 2008

by vyelkin
Interesting. I've opted for a refresh loop set on a 5-minute delay. This is just a temporary solution until I get a second server. Thanks for the info, though.

Pivo
Aug 20, 2004


This isn't an SQL question, just a fail you guys might want to know about... I've been working on a bounceback handler, and we got a non-standard delivery report (it wasn't multipart, just plaintext) that contained this in the first few lines:

code:
The following text was generated during the delivery attempt:

------ pipe to |/home/hosting/forwardtoticket/forwardtoticket.php
       generated by [email]support@XXXXXXXXXXX.com[/email] ------

You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for 
the right syntax to use near 's domain name(s) will be suspended shortly', 'Greetings,

Please read this impor' at line 2
------ This is a copy of the message, including all the headers. ------
Do you see what is happening here? Whatever they use to automate mail handling is choking on single quotes in our message.

You can literally perform an SQL injection on these guys just by sending them an email. The Internet never ceases to amaze.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Let's talk about Partitioning on SQL Server 2014:

Let's start by saying I've never worked with partitioning before, though I understand the concepts of horizontal and vertical partitioning (I think).

Right now I'm looking at designing an OLAP database that will collect on the order of 20 million new records a day, with a primary key of [timestamp, areaID]. Typically I work with large structured data sets by creating a filesystem for data files and another filesystem for indexes. I then create several files in each filesystem and create a dedicated volume on storage to serve each file, spreading the I/O load across a battery of volumes to maximize I/O.

However, in this deployment, I only have a single local logical RAID array to host the data. Past experience indicates that I can still receive a performance benefit by creating multiple volumes and putting a single data (or index) file in that volume. This way I can avoid fragmentation on disk as data files grow and can use multiple streams for reading and writing data.

But now I'm reading up on partitioning, and instead of creating a single file system for the data and a single file system for indexes, I can break this up into partitions on date, partitioned by year/month, or by AreaID.

Can anyone speak to the pros/cons of each scenario?

When partitioning by date, what happens when a new month arrives? Does the database have to be modified to add a new month partition and thus a new month partition data file? Or does it happen automatically?

Similarly, if a new AreaID is added, does the database have to be repartitioned or does it somehow know to create a partition for the new area?

EmmyOk
Aug 11, 2013

I am using PostgreSQL (I have no choice in the matter) and would like to know if there is a query to count the columns in a row where the value is true. The table would look something like this:


---------------------------
| user_id | A | B | C | D |
---------------------------
| 1234567 | T | T | F | F |
---------------------------


What I would like to do is count how many TRUE columns there are on the row identified by the user_id in the query.

Sedro
Dec 31, 2008
code:
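-- a boolean cast to integer gives 1 for true / 0 for false, so the sum counts the TRUE columns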
select a::integer + b::integer + c::integer + d::integer as count
from t
where user_id = ...

EmmyOk
Aug 11, 2013

Sedro posted:

code:
select a::integer + b::integer + c::integer + d::integer as count
from t
where user_id = ...

Perfect! This worked very well for what I wanted. Thanks a lot

joebuddah
Jan 30, 2005
Could somebody give me some help as to why this is not updating
code:
<?php
$host = "localhost";
$db   = "demo";
$user = "user";
$pass = " ";

$conn = new PDO("mysql:host=$host;name=$name", $user, $pass);

///////////// Collect form data /////////////
$Patid    = $_POST['Patid'];
$Location = $_POST['Location'];
///////// End of data collection ////////////
error_reporting(E_ERROR | E_PARSE | E_CORE_ERROR);

$sql = $dbo->prepare("Update lobby set Location=:Location where Patid=:Patid");
$sql->bindParam(':Patid', $Patid);
$sql->bindParam(':Location', $Location);

if ($sql->execute()) {
    echo "Successfully updated Profile";
} // End of if profile is ok
else {
    print_r($sql->errorInfo()); // if any error is there it will be posted
    $msg = " Database problem, please contact site admin ";
}

I am not getting any error messages but it is not updating.

For clarity, the logic is as follows: the user enters a Patid and a Location. Patid is unique, and I just want to update the Location for the Patid entered.

EmmyOk
Aug 11, 2013

joebuddah posted:

Could somebody give me some help as to why this is not updating
[code]

I am not getting any error messages but it is not updating.

For clarity the logic is as follows. user enters a Patid and Location. Patid is unique and I just want to update the location on the Patid entered

I'm sure it probably exists further down, outside of the quoted code, but do you definitely have your closing '?>' for the PHP?

McGlockenshire
Dec 16, 2005

GOLLOCKS!

joebuddah posted:

I am not getting any error messages but it is not updating.

Nothing's immediately standing out, there.

You'll want to switch to E_ALL, expressly turn on error display using ini_set('display_errors', true); and make sure you expressly tell PDO how to report errors. There's also the PHP thread to check out.

EmmyOk posted:

I'm sure it probably exists further down, outside of the quoted code, but do you definitely have your closing '?>' for the PHP?

Not only does PHP not require the closing tag at the bottom of a file, including one can allow stray whitespace to sneak in after it, and that can lead to very, very irritating problems. It's best to leave the final tag off.
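
For instance, a sketch with placeholder credentials (ERRMODE_EXCEPTION makes PDO failures impossible to miss):
code:
<?php
error_reporting(E_ALL);
ini_set('display_errors', true);

// placeholder DSN and credentials
$conn = new PDO("mysql:host=localhost;dbname=demo", "user", "pass");
$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);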

EmmyOk
Aug 11, 2013

McGlockenshire posted:

Nothing's immediately standing out, there.

You'll want to switch to E_ALL, expressly turn on error display using ini_set('display_errors', true); and make sure you expressly tell PDO how to report errors. There's also the PHP thread to check out.


Not only does PHP not require the closing tag at the bottom of a file, including one can allow stray whitespace to sneak in after it, and that can lead to very, very irritating problems. It's best to leave the final tag off.

Cool I never knew that!

joebuddah
Jan 30, 2005
Thanks for the help so far. I fixed the error reporting and am now getting these errors:

Notice: Undefined variable: dbo in /home/wrchtnoy/public_html/demos/update4.php on line 16

Fatal error: Call to a member function prepare() on a non-object in /home/wrchtnoy/public_html/demos/update4.php on line 16

This is line 16:

$sql->bindParam(':Patid',$Patid);

edit: found the problem:

$sql = $dbo->prepare
was supposed to be $odb

joebuddah fucked around with this message at 01:30 on Jul 28, 2015

ManDingo
Jun 1, 2001
This has been annoying me forever. Why is it that sometimes I write a query in SSMS and save the .sql, then I come back, say, the following day and open up the query, and all the tables and fields have red squigglies and I get a ton of "invalid object name" errors? Then I put "USE [database name]" as the first line of the query and it works again. The crazy thing is that after it runs once, I can remove the USE statement and everything still works. What gives?

big trivia FAIL
May 9, 2003

"Jorge wants to be hardcore,
but his mom won't let him"

When you're connecting, you're not specifying the catalog in the connection string, so it's defaulting (probably) to the master catalog.

kumba
Nov 8, 2003

I posted my food for USPOL Thanksgiving!

enjoy the ride

Lipstick Apathy
Whenever you connect to the server, your connection defaults to the "master" database and you have to either include a use statement to let your query know which database you want to talk to, or you need to just change it in the menu at the top:

[screenshot: the database dropdown in the SSMS toolbar]

To avoid doing either of these steps, you should get used to using 3 (or even 4 if you deal with multiple servers) part naming in your queries instead. So, if you have a server named Server1, and a database within that called DBMaster, and a table called Customers in the dbo schema within that database, after connecting to Server1 you could write:

code:
USE dbmaster 

SELECT * 
FROM   customers 
Or, you can write this instead:
code:
SELECT * 
FROM   dbmaster.dbo.customers 
You can use 4-part naming including the server name to join databases across servers (assuming they can see each other / are linked), like so:
code:
SELECT * 
FROM   server1.dbmaster.dbo.customers c1 
       INNER JOIN server2.dbmaster.dbo.customers c2 
               ON c1.id = c2.id 

ManDingo
Jun 1, 2001
Ah yes that makes perfect sense now. Thanks guys.

No Safe Word
Feb 26, 2005

Kumbamontu posted:

You can use 4-part naming including the server name to join databases across servers (assuming they can see each other / are linked), like so:

Just be aware that like 99% of the time this is the wrong way to do such a thing. Other than occasional one-off things.

kumba
Nov 8, 2003

I posted my food for USPOL Thanksgiving!

enjoy the ride

Lipstick Apathy

No Safe Word posted:

Just be aware that like 99% of the time this is the wrong way to do such a thing. Other than occasional one-off things.

This is the first I've heard of this - how else can you accomplish the same thing? We have several reporting servers that do various things, and we have nightly processes where batches of monitoring queries run, and we've never had a problem with the servers talking to each other.

e: This post may sound snarky, but it's not meant to be; if we can do this in a better way, I'm totally open to it!

kumba fucked around with this message at 17:20 on Jul 29, 2015

No Safe Word
Feb 26, 2005

Kumbamontu posted:

This is the first I've heard of this - how else can you accomplish the same thing? We have several reporting servers that do various things, and we have nightly processes where batches of monitoring queries run, and we've never had a problem with the servers talking to each other.

e: This post may sound snarky, but it's not meant to be; if we can do this in a better way, I'm totally open to it!

You do an export and import via things like SSIS. For batch stuff in off hours, using linked servers isn't the worst abuse. But linked servers introduce complexity that is generally unnecessary: permissions, network latency, schema inconsistencies, etc. Those are things you have to manage on both ends, whereas with an export/import process you only have to manage the one process.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Any help with my partitioning question?

Or is this the wrong thread for this? Perhaps there's a DBA thread somewhere?


Agrikk posted:

Let's talk about Partitioning on SQL Server 2014:

Let's start by saying I've never worked with partitioning before, though I understand the concepts of horizontal and vertical partitioning (I think).

Right now I'm looking at designing an OLAP database that will collect on the order of 20 million new records a day, with a primary key of [timestamp, areaID]. Typically I work with large structured data sets by creating a filesystem for data files and another filesystem for indexes. I then create several files in each filesystem and create a dedicated volume on storage to serve each file, spreading the I/O load across a battery of volumes to maximize I/O.

However, in this deployment, I only have a single local logical RAID array to host the data. Past experience indicates that I can still receive a performance benefit by creating multiple volumes and putting a single data (or index) file in that volume. This way I can avoid fragmentation on disk as data files grow and can use multiple streams for reading and writing data.

But now I'm reading up on partitioning, and instead of creating a single file system for the data and a single file system for indexes, I can break this up into partitions on date, partitioned by year/month, or by AreaID.

Can anyone speak to the pros/cons of each scenario?

When partitioning by date, what happens when a new month arrives? Does the database have to be modified to add a new month partition and thus a new month partition data file? Or does it happen automatically?

Similarly, if a new AreaID is added, does the database have to be repartitioned or does it somehow know to create a partition for the new area?

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Kumbamontu posted:

Whenever you connect to the server, your connection defaults to the "master" database and you have to either include a use statement to let your query know which database you want to talk to, or you need to just change it in the menu at the top:

[screenshot: the database dropdown in the SSMS toolbar]

To avoid doing either of these steps, you should get used to using 3 (or even 4 if you deal with multiple servers) part naming in your queries instead. So, if you have a server named Server1, and a database within that called DBMaster, and a table called Customers in the dbo schema within that database, after connecting to Server1 you could write:

Actually, I think the initial database defaults to... the database configured as the default for your login. Not that the explicit DB name suggestion is incorrect in any way, but if everyone uses the same DB 90% of the time, it can be made the default for ease of use.
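
If everyone really does use the same DB, the per-login default is one statement away; a sketch with a hypothetical login name:
code:
ALTER LOGIN [ReportingUser] WITH DEFAULT_DATABASE = [DBMaster];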

Fiedler
Jun 29, 2002

I, for one, welcome our new mouse overlords.

Agrikk posted:

When partitioning by date, what happens when a new month arrives? Does the database have to be modified to add a new month partition and thus a new month partition data file? Or does it happen automatically?

Similarly, if a new AreaID is added, does the database have to be repartitioned or does it somehow know to create a partition for the new area?
SQL Server doesn't handle the partition management, but there are mechanisms you can use to automate partitioning. For example, see https://msdn.microsoft.com/library/aa964122(SQL.90).aspx

Here are some additional resources:
https://msdn.microsoft.com/en-us/library/dd578580.aspx
http://www.brentozar.com/sql/table-partitioning-resources/
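
To make the moving parts concrete, here's a minimal sketch of monthly range partitioning (all names hypothetical). Nothing happens automatically when a new month arrives: the SPLIT at the end is the step that has to run, by hand or via a scheduled job.
code:
CREATE PARTITION FUNCTION pf_monthly (datetime2)
    AS RANGE RIGHT FOR VALUES ('2015-06-01', '2015-07-01', '2015-08-01');

CREATE PARTITION SCHEME ps_monthly
    AS PARTITION pf_monthly ALL TO ([PRIMARY]);

-- when a new month approaches, add a boundary for it:
ALTER PARTITION SCHEME ps_monthly NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pf_monthly() SPLIT RANGE ('2015-09-01');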

Kilson
Jan 16, 2003

I EAT LITTLE CHILDREN FOR BREAKFAST !!11!!1!!!!111!
Using Postgres 9.3, I have a table called log with just under 10m rows, defined like so:
pre:
    Column    |           Type           |                    Modifiers
--------------+--------------------------+--------------------------------------------------
 id           | integer                  | not null default nextval('log_id_seq'::regclass)
 log_level    | text                     | not null
 log_category | text                     |
 message      | text                     | not null
 username     | text                     |
 time_stamp   | timestamp with time zone | not null
 company_name | text                     |
 ip_address   | text                     |
Indexes:
    "log_pkey" PRIMARY KEY, btree (id)
    "log_company_name" btree (company_name)
    "log_time_stamp_idx" btree (time_stamp)
We want full-text, unanchored search on some of the columns, so I tried adding GiST indices using pg_trgm, like so:
create index log_search_message_idx on log using gist(message gist_trgm_ops);

This works great sometimes, but the results are wildly inconsistent. Here are two queries that return exactly the same set of 1664 rows:

pre:
explain(verbose,buffers,analyze) select *, count(*) over() as full_count from log where company_name = 'x' and (message ilike '%success%') order by time_stamp desc;

                                                                        QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=5073.78..5076.68 rows=1163 width=232) (actual time=84818.881..84819.108 rows=1664 loops=1)
   Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address, (count(*) OVER (?))
   Sort Key: log.time_stamp
   Sort Method: quicksort  Memory: 491kB
   Buffers: shared hit=1 read=1072075 written=9
   ->  WindowAgg  (cost=529.57..5014.56 rows=1163 width=232) (actual time=84817.815..84818.243 rows=1664 loops=1)
         Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address, count(*) OVER (?)
         Buffers: shared hit=1 read=1072075 written=9
         ->  Bitmap Heap Scan on log  (cost=529.57..5000.02 rows=1163 width=232) (actual time=84744.119..84816.478 rows=1664 loops=1)
               Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address
               Recheck Cond: (log.message ~~* '%success%'::text)
               Filter: (log.company_name = 'x'::text)
               Buffers: shared hit=1 read=1072075 written=9
               ->  Bitmap Index Scan on log_search_message_idx  (cost=0.00..529.28 rows=1165 width=0) (actual time=84743.042..84743.042 rows=1664 loops=1)
                     Index Cond: (log.message ~~* '%success%'::text)
                     Buffers: shared hit=1 read=1071286 written=9
 Total runtime: 84819.782 ms
pre:
 explain (verbose,buffers,analyze) select *, count(*) over() as full_count from log where company_name = 'x' and (message ilike '%cessful%') order by time_stamp desc;

                                                                     QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=5073.78..5076.68 rows=1163 width=232) (actual time=87.739..87.957 rows=1664 loops=1)
   Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address, (count(*) OVER (?))
   Sort Key: log.time_stamp
   Sort Method: quicksort  Memory: 491kB
   Buffers: shared hit=5996 read=553
   ->  WindowAgg  (cost=529.57..5014.56 rows=1163 width=232) (actual time=86.738..87.182 rows=1664 loops=1)
         Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address, count(*) OVER (?)
         Buffers: shared hit=5996 read=553
         ->  Bitmap Heap Scan on e911manager.log  (cost=529.57..5000.02 rows=1163 width=232) (actual time=84.324..86.114 rows=1664 loops=1)
               Output: id, log_level, log_category, message, username, time_stamp, company_name, ip_address
               Recheck Cond: (log.message ~~* '%cessful%'::text)
               Filter: (log.company_name = 'x'::text)
               Buffers: shared hit=5996 read=553
               ->  Bitmap Index Scan on log_search_message_idx  (cost=0.00..529.28 rows=1165 width=0) (actual time=84.227..84.227 rows=1664 loops=1)
                     Index Cond: (log.message ~~* '%cessful%'::text)
                     Buffers: shared hit=5207 read=553
 Total runtime: 88.189 ms
As you can see, the times are vastly different. Caching isn't the reason, because if I rerun the first query it's just as slow. Also, this is running on a test machine with basically zero other usage, so I imagine it could only get worse if the machine was under load.

I don't understand how the index could work so well sometimes and so poorly others. Is there something I'm doing wrong here?

Pardot
Jul 25, 2001

Kilson posted:

Caching isn't the reason,

It is the reason. Look at how much higher Buffers: shared hit is on the fast query.

Kilson
Jan 16, 2003

I EAT LITTLE CHILDREN FOR BREAKFAST !!11!!1!!!!111!

Pardot posted:

It is the reason. Look at how much higher Buffers: shared hit is on the fast query.

What I meant was that if I do the same slow query twice in a row, it's still slow. If caching made the difference, it should work better when performing the same query as opposed to a different one. Caching isn't making the second query 1000x faster. I can completely rebuild the table and/or indices and get the same results as above for the two queries, regardless of the order I do them.

Pardot
Jul 25, 2001

Kilson posted:

What I meant was that if I do the same slow query twice in a row, it's still slow. If caching made the difference, it should work better when performing the same query as opposed to a different one. Caching isn't making the second query 1000x faster. I can completely rebuild the table and/or indices and get the same results as above for the two queries, regardless of the order I do them.

Oh, I understand what you were saying now. If you know it's all just English, you might want to try giving the column the "C" locale, which speeds up comparisons, and also check out the trigram extension, with which you can create GIN and/or GiST indexes that work with ILIKE: http://www.postgresql.org/docs/9.1/static/pgtrgm.html

quote:

Beginning in PostgreSQL 9.1, these index types also support index searches for LIKE and ILIKE, for example

SELECT * FROM test_trgm WHERE t LIKE '%foo%bar';
The index search works by extracting trigrams from the search string and then looking these up in the index. The more trigrams in the search string, the more effective the index search is. Unlike B-tree based searches, the search string need not be left-anchored.

The choice between GiST and GIN indexing depends on the relative performance characteristics of GiST and GIN, which are discussed elsewhere. As a rule of thumb, a GIN index is faster to search than a GiST index, but slower to build or update; so GIN is better suited for static data and GiST for often-updated data.
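
A sketch of the GIN variant, which per the quoted docs should search faster on mostly-static data like a log table:
code:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX log_search_message_gin_idx ON log USING gin (message gin_trgm_ops);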

Kilson
Jan 16, 2003

I EAT LITTLE CHILDREN FOR BREAKFAST !!11!!1!!!!111!

Pardot posted:

Oh, I understand what you were saying now. If you know it's all just English, you might want to try giving the column the "C" locale, which speeds up comparisons, and also check out the trigram extension, with which you can create GIN and/or GiST indexes that work with ILIKE: http://www.postgresql.org/docs/9.1/static/pgtrgm.html

I'm already using GiST trigram indices. That's why I don't understand how one query is so much slower than the other.

Moreau
Jul 26, 2009

This may be the wrong thread, in which case I apologize.

I have a SQL Server in Perth (SQL2012) on one domain, and I have a server (SQL2008R2) in Melbourne on a completely different domain, different company. I need to read some information from a database hosted on the Melbourne server.

Currently, I have Windows authentication details for both servers, and SQL accounts for both sides. I can, from my Perth server, access a remote IP in Windows Explorer and see the Melbourne machine.

My question is: how do I get the SQL servers to talk, when everything else can?

Dalmuti
Apr 8, 2007
Set 'em up as linked servers. Pretty sure you can plug in whatever domain credentials you want.
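
The classic incantation, sketched with made-up names and an SQL login on the remote side (since the domains don't trust each other):
code:
EXEC sp_addlinkedserver
     @server = N'MELBOURNE', @srvproduct = N'',
     @provider = N'SQLNCLI', @datasrc = N'melbourne-host-or-ip';

EXEC sp_addlinkedsrvlogin
     @rmtsrvname = N'MELBOURNE', @useself = N'FALSE',
     @locallogin = NULL, @rmtuser = N'sql_reader', @rmtpassword = N'secret';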

Revalis Enai
Apr 21, 2003
Wait, what's my phone number again?
Fun Shoe
Not a small question, but this is the only thread on SQL I can find.

My employer currently uses Google Sheets to keep track of product prices, inventory, and ordering. The sheets have between 200 and 700 items and are accessed from two other locations. I have spent the last 3-4 months self-learning and writing scripts to automate some tasks, but they break whenever someone accidentally modifies the columns.

The boss also wants to be able to automatically update (from Excel or PDF) and compare prices. At this point I think I'm hitting the limit of what Google Sheets is capable of and need to move to SQL.

My knowledge of databases is limited to a course on relational databases back in college. I'm currently messing around with XAMPP, but I think there's no way to avoid programming a custom interface, as most of the users are older people with limited technical knowledge. My programming skill is limited, so I need to commit to a language for the interface. I'm leaning towards PHP so I don't have to teach the other locations how to install or run a program.

Will I be able to create what my boss wants with MySQL and PHP? If so, what's a good way to start learning the two from scratch?

kloa
Feb 14, 2007


If they know Excel, hook up a SQL backend as the data source and run the interface through that.

I wouldn't build a PHP front end unless you want to learn how to (and maintain it for as long as you work there).

PhantomOfTheCopier
Aug 13, 2008

Pikabooze!
Oh look, PostgreSQL has a foreign data wrapper for that too... https://github.com/lincolnturner/gspreadsheet_fdw

Revalis Enai
Apr 21, 2003
Wait, what's my phone number again?
Fun Shoe

kloa posted:

If they know Excel, hook up a SQL backend as the data source and run the interface through that.

I wouldn't build a PHP front end unless you want to learn how to (and maintain it for as long as you work there).

I'll explore that option since I know all of our machines have Excel. Would this method allow multiple users to access and change values on the same table, like Google Sheets?

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture

Postgres 9.4

I've got three tables like so:

product:
id | description

tag:
id | product | text

attributes:
id | product | key | value

The product column in both tag and attributes references id in product. All other fields are text. I've got a view that joins the tables and results in the following shape. Note that for each product I get rows equal to the number of tags times the number of key/value pairs.

product.id | product.description | tag.text | ARRAY[attributes.key, attributes.value]

Can I aggregate both the tags and the k/v pairs (into arrays) with a single SELECT/GROUP BY statement, or do I need to do one and then the other, building an intermediate table along the way?
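
One way that sidesteps the row-multiplication problem entirely, sketched against the tables above, is a correlated subquery per child table; since array_agg can't aggregate arrays before 9.5, the k/v pairs are gathered as JSON pairs here:
code:
SELECT p.id,
       p.description,
       (SELECT array_agg(t.text)
          FROM tag t WHERE t.product = p.id) AS tags,
       (SELECT json_agg(json_build_array(a.key, a.value))
          FROM attributes a WHERE a.product = p.id) AS attrs
FROM   product p;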
