|
I am building my first PHP-based AWS DynamoDB application and I am having problems connecting to an existing table. I'm not sure if this is an AWS SDK issue or a PHP issue. When I run the PHP script I get the following error(s):code:
|
# ? Mar 28, 2022 23:31 |
|
|
|
(Shooting from the hip here, not Windows-knowledgeable. You may have already found these resources.) Seems to be an issue with the cURL library in PHP, not the AWS SDK (similar issue). I don't think you need to worry about setting a value for CApath. Based on the libcurl error page, I'd check whether PHP has permission to read cacert.pem. Beyond that, another similar error was fixed by updating the curl/openssl entries in php.ini to reflect the location of cacert.pem.
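If it helps, here's a quick diagnostic sketch for checking that from PHP itself. It assumes the bundle is configured via the curl.cainfo or openssl.cafile entries in php.ini; the paths shown in the comments are examples, not your actual paths.

```php
<?php
// Diagnostic sketch: is a CA bundle configured, and can PHP read it?
// php.ini entries this checks (example values, not your real paths):
//   curl.cainfo = "C:\php\extras\ssl\cacert.pem"
//   openssl.cafile = "C:\php\extras\ssl\cacert.pem"
function ca_bundle_status(): array
{
    $caFile = ini_get('curl.cainfo') ?: ini_get('openssl.cafile');
    $configured = is_string($caFile) && $caFile !== '';
    return [
        'path'       => $configured ? $caFile : null,
        'configured' => $configured,
        // false here while configured=true points at a permissions problem
        'readable'   => $configured && is_readable($caFile),
    ];
}

var_dump(ca_bundle_status());
```

If 'configured' comes back false, php.ini has no entry yet; if 'readable' is false while 'configured' is true, it's the permissions angle.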
|
# ? Mar 29, 2022 00:16 |
|
I've been getting some help in the WordPress thread with Gravity Forms but I feel bad bugging one person about it. Anyways, I'm trying to programmatically reset the cost of some items to $0, and this code kinda works. Trouble is that I can't seem to get it to actually return the value out of the foreach loop. I feel like I'm missing something obvious. The price is correctly updated in the loop, but outside it doesn't seem to want to update. Wondering if anyone can help. Thanks!!code:
|
# ? Apr 1, 2022 01:45 |
|
That's a scope issue: foreach hands you a copy in $product, so changes to it never make it back into the original array. You'll want to assign to $product_info['products'][$key]['price'] instead.
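A minimal sketch of the difference; the array shape here is made up, since the original code: block didn't survive the quote:

```php
<?php
// Hypothetical data standing in for the real $product_info array.
$product_info = ['products' => [
    ['name' => 'widget', 'price' => 19.99],
    ['name' => 'gadget', 'price' => 4.99],
]];

foreach ($product_info['products'] as $key => $product) {
    $product['price'] = 0;                         // only changes the copy
    $product_info['products'][$key]['price'] = 0;  // changes the real array
}
```

foreach ($arr as &$product) by reference also works, but remember to unset($product) after the loop to avoid the classic dangling-reference bug.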
|
# ? Apr 1, 2022 02:07 |
|
You're a genius, omg you just saved me hours.
|
# ? Apr 1, 2022 02:19 |
|
e; Figured it out! It was an error with the if/then statement :v
aperfectcirclefan fucked around with this message at 21:47 on Apr 9, 2022 |
# ? Apr 9, 2022 21:23 |
|
An old workplace contacted me and made me aware of a download abuse issue with PHP's Content-Disposition: attachment and readfile. It seems that my problem is with readfile: although this script works, whether or not the client closes their browser, readfile reads the entire contents of the mp4, which opens the door to abuse by scripts that initiate the download and immediately cancel it. Something, somewhere, is running a script which clicks this link hundreds of times per second, running my PHP script and immediately cancelling the download, but my server is preparing that entire file to be offloaded each time. Here's the script I'm running when the user/abuser clicks a download link: code:
|
# ? Apr 21, 2022 17:48 |
Don't use PHP to send the file. Or reinvent the wheel and implement chunked reading and transfer, where you continually check connection_aborted() as well as handle HTTP range requests in full.
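A rough sketch of the chunked approach; the function name and chunk size are mine, and range handling is left out here:

```php
<?php
// Stream a file in small chunks, bailing out as soon as the client
// disconnects, instead of reading the whole file up front like readfile().
function stream_file(string $path, int $chunkSize = 8192): int
{
    $fh = fopen($path, 'rb');
    if ($fh === false) {
        return 0;
    }
    $sent = 0;
    while (!feof($fh)) {
        $chunk = fread($fh, $chunkSize);
        if ($chunk === false) {
            break;
        }
        echo $chunk;
        $sent += strlen($chunk);
        flush();                       // push the chunk out to the client
        if (connection_aborted()) {    // client gone: stop wasting resources
            break;
        }
    }
    fclose($fh);
    return $sent;
}
```

You'd call this after sending the Content-Type/Content-Length headers; PHP only notices a dead connection after it tries to write, which is why the flush() per chunk matters.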
|
|
# ? Apr 21, 2022 18:10 |
|
From the docs:quote:readfile() will not present any memory issues, even when sending large files, on its own. If you encounter an out of memory error ensure that output buffering is off with ob_get_level(). You can also install xsendfile and make it the webserver's problem to serve the file. You will need to do some header stuff as well, I don't know the details, I use a framework to do it. There's probably other ways to hand off the file serving to the server as well.
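The "header stuff" is roughly this. A hedged sketch for Apache's mod_xsendfile (nginx spells it X-Accel-Redirect instead), with a made-up file path:

```php
<?php
// Hand the actual file transfer off to the web server via mod_xsendfile.
// '/var/media/video.mp4' is a placeholder; use a validated local path.
$file = '/var/media/video.mp4';
$headers = [
    'Content-Type: video/mp4',
    'Content-Disposition: attachment; filename="' . basename($file) . '"',
    'X-Sendfile: ' . $file,   // Apache intercepts this and serves the file
];
foreach ($headers as $h) {
    header($h);
}
// Note: no readfile() call. The PHP process finishes almost immediately
// and the server, which is built for this, streams the bytes.
```

The module has to be installed and enabled (XSendFile On, plus an XSendFilePath allowlist) for the header to do anything; otherwise the client just gets an empty body.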
|
# ? Apr 21, 2022 18:14 |
|
Thanks a lot for the suggestions. I worked on this over a decade ago, and there were some other sketchy things going on. Like, the user could send whatever address they wanted, even the server configuration if they knew what it was. This is what I have now. How does it look?code:
|
# ? Apr 23, 2022 20:58 |
Not better at all. You need to be checking the connection_aborted() function to see whether the client has disconnected and then stop sending, otherwise you are definitely wasting server resources sending something to nowhere.

Sending content type application/octet-stream is a good way to ensure that browsers will almost certainly not use the video in a <video> element. Send the right MIME type for the data unless you explicitly want it to only be a file download. If the goal is to let browsers play the video, you really should also implement range requests as I suggested above. That can allow seeking in the video without downloading the full thing.

I would also suggest using a safer identifier for the data to send than just a filename. Have a database or something to look up that the requested resource is on the approved list, and optionally do other permission checks for the specific client/user. Otherwise you're far better off just serving the files directly via your web server instead of wrapping the download in a script.

Edit: ... I just noticed. You're taking the video file from a web URL and re-sending it? Is that on the same server, or somewhere else? Just to be sure, you do realize that what you're doing there is telling PHP to open an HTTP connection as a client to the URL you construct there, download the data at that URL, and then re-send the just-downloaded data to the client the script is serving, right?

nielsm fucked around with this message at 22:12 on Apr 23, 2022 |
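For the range-request part, a minimal single-range parser might look like the following; the function name is mine, and a full implementation also needs the 206/416 status codes and a Content-Range response header:

```php
<?php
// Hypothetical helper: parse a single-range "Range: bytes=start-end" header.
// Returns [start, end] (inclusive) or null when the whole file should be sent.
function parse_range(?string $header, int $size): ?array
{
    if ($header === null || !preg_match('/^bytes=(\d*)-(\d*)$/', $header, $m)) {
        return null;                            // absent or multi-range header
    }
    [, $start, $end] = $m;
    if ($start === '' && $end === '') {
        return null;                            // "bytes=-" is meaningless
    }
    if ($start === '') {                        // suffix form: last N bytes
        $start = max(0, $size - (int)$end);
        $end = $size - 1;
    } else {
        $start = (int)$start;
        $end = ($end === '') ? $size - 1 : min((int)$end, $size - 1);
    }
    if ($start > $end || $start >= $size) {
        return null;                            // unsatisfiable; really a 416
    }
    return [$start, $end];
}
```

You'd then fseek() to $start and stream $end - $start + 1 bytes in chunks, checking connection_aborted() as you go.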
|
# ? Apr 23, 2022 22:07 |
|
nielsm posted:Not better at all. Thanks... This is a former boss contacting me asking if I can fix it. Guess not! I have not done this stuff in a really long time. My impression was that unless I have ignore_user_abort(true), PHP should see that the connection was aborted and stop executing the download, at least in modern versions of PHP, and I know this one was updated to 7.4.29. And I thought readfile() was the exception to that. Those seem like all good suggestions, but kind of out of scope for a quick fix from me, unless they want to pay $$$. It's really good to know though, so thank you. That last bit you mentioned in your edit just makes me want to serve these videos on the same hosting. I have no idea why they have a separate download subdomain with other hosting. This should have plenty of space to host it.
|
# ? Apr 24, 2022 00:18 |
|
Good Sphere posted:
I'm going to ignore the rest of the posts in this thread just to point out my_script.php?MP4Address=../../../../sensitive_file_you_really_dont_want_sent. If this is what the original actually looked like, and you are not currently responsible for it: this is a trash fire, do not touch it and walk away.
|
# ? Apr 24, 2022 01:46 |
|
If I have a nested PHP array like so:code:
|
# ? May 13, 2022 16:07 |
|
If these are unsorted, linear search is probably the way to go. If the list isn't horribly long, the time probably won't matter. If you want to rework this to make subsequent lookups fast, iterating the whole set once to massage it into a [host][target] lookup should make read speed pretty good after building the index:code:
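A sketch of that index-building step, with a made-up record shape since the original code: block didn't come through:

```php
<?php
// One O(n) pass builds a [host][target] lookup table; every lookup after
// that is a constant-time array access instead of a linear scan.
$records = [   // hypothetical data
    ['host' => 'alpha', 'target' => 'db1', 'latency' => 12],
    ['host' => 'alpha', 'target' => 'db2', 'latency' => 30],
    ['host' => 'beta',  'target' => 'db1', 'latency' => 7],
];

$index = [];
foreach ($records as $rec) {
    $index[$rec['host']][$rec['target']] = $rec;
}

// Fast reads afterwards, with null for misses:
$hit  = $index['beta']['db1'] ?? null;
$miss = $index['gamma']['db1'] ?? null;
```

The ?? null on reads keeps misses from throwing undefined-index notices.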
|
# ? May 13, 2022 16:21 |
|
Thanks for this! This was the actual code I wrote:code:
|
# ? May 13, 2022 21:23 |
|
New question: How can I programmatically pull the public IP address of a host behind a NAT gateway via PHP? Is there a technique people use?
|
# ? May 20, 2022 16:36 |
|
The public IP would just be in $_SERVER['REMOTE_ADDR'], no? The private IP would be more of a challenge.
|
# ? May 20, 2022 19:11 |
|
Depending on the network setup, the private source IP might be in X-Forwarded-For.
|
# ? May 20, 2022 19:25 |
|
duz posted:Depending on the network setup, the private source IP might be in X-Forwarded-For. Ideally, but it requires extra config and can get super messy: in my own setups, nginx serves behind haproxy that splits off my websockets, so the header has to get passed along multiple times, or even per config block. I skimmed a bit earlier and didn't really see a situation where one would have to go this far, but now that I see one: $ip = file_get_contents("https://ipecho.net/plain"); Not ideal to poke something external, or to hit it a bazillion times a second, but that'll do for little stuff?
|
# ? May 20, 2022 21:55 |
|
Does anyone know what would cause a lost install of XAMPP VM, along with all the files you put in htdocs? edit: I reinstalled XAMPP VM, and the files are back! I wonder how they were stored in the first place. A search did not find them initially. Good Sphere fucked around with this message at 16:27 on Jun 4, 2022 |
# ? Jun 4, 2022 16:11 |
|
Agrikk posted:New question: I ended up using the AWS SDK to query the NAT gateway itself for the IP address of the public interface.
|
# ? Jun 9, 2022 05:47 |
|
What's the best way to tackle this problem? I have several hundred thousand json documents each with the format roughly: code:
[item_id],[name],[description],[group_id],[market_group],[packaged_volume],[portion_size],[radius],[type_id],[volume],[attribute1_id],[attribute1_value],[attribute2_id],[attribute2_value],...,[attribute68_id],[attribute68_value] I've extracted the JSON into a multidimensional array and am able to extract the top-level elements. My problem is how to tackle the variable number of attribute pairs. I could easily copy/paste 68 if/then statements to check for the existence of each pair, but that seems crude and inelegant. I can loop over $json['attributes'] to extract each attribute pair, but I don't know how to write the loop to stuff the data into the resulting variable so I can write the INSERT query. I can do a count of $json['attributes'], halve it to get the number of attribute pairs, and what I'm thinking is create a variable variable so that code:
|
# ? Jul 6, 2022 01:28 |
|
That database schema doesn't seem particularly useful for anything, I have to say. If you made an actual relational schema (a separate attribute table with item_id, attribute_id, value fields), then it would be really easy to insert things correctly, and much easier to write queries based on the attributes. Alternatively, if you're just trying to store this information and aren't interested in querying based on the values in it, storing all the attributes in a single JSON field will be much easier to work with.
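To make that concrete, here's a sketch of the normalized version. The table and column names are my inventions, and I'm using an in-memory SQLite database so the sketch runs anywhere, but the INSERT shape is identical for Postgres or MySQL:

```php
<?php
// Sketch: one row per (item, attribute) pair instead of 68 fixed column
// pairs. Schema/names are assumptions, not from the thread.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE item_attribute (item_id INTEGER, attribute_id INTEGER, value TEXT)');

// One decoded JSON document (shape assumed from the post).
$json = [
    'item_id' => 42,
    'attributes' => [
        ['attribute_id' => 1, 'value' => '3.5'],
        ['attribute_id' => 9, 'value' => 'hull'],
    ],
];

// However many attribute pairs a document has, the loop just runs that
// many times; no per-column if/then statements needed.
$stmt = $pdo->prepare(
    'INSERT INTO item_attribute (item_id, attribute_id, value) VALUES (?, ?, ?)'
);
foreach ($json['attributes'] as $attr) {
    $stmt->execute([$json['item_id'], $attr['attribute_id'], $attr['value']]);
}

$count = (int)$pdo->query('SELECT COUNT(*) FROM item_attribute')->fetchColumn();
```

Querying "all items where attribute 9 = hull" then becomes a plain WHERE clause instead of 68 ORed column comparisons.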
|
# ? Jul 6, 2022 01:58 |
|
That's a hideous table structure. Really bad. Anyway here's a rough idea of how to do what you want, assuming that each of your 136 attribute columns defaults to null and doesn't need to be specified unless it has a value. Good luck! php:<?php
foreach ($json_documents as $document) {
    $row = [
        'item_id' => $document['item_id'],
        'name' => $document['name'],
        // set other top-level elements
    ];
    $attr_count = 1;
    foreach ($document['attributes'] as $attr) {
        $row["attribute{$attr_count}_id"] = $attr['attribute_id'];
        $row["attribute{$attr_count}_value"] = $attr['value'];
        $attr_count++;
    }
    do_insert($row);
}
|
# ? Jul 6, 2022 02:25 |
|
Tiny Bug Child posted:That's a hideous table structure. Really bad. While I appreciate your code, and thanks, I am more appreciative of this statement. For the last, I dunno, gazillion years I've been operating under the premise that OLAP data warehouses should be de-normalized to speed up reads. I hit the internet to look for all of my evidence that this table structure was the way to go, and imagine my surprise when I ran into site after site, post after post debunking this myth, and now I am embarrassed for having done all the denormalization stuff along the way. "One Big Row" has been my mantra for years. Oops! I know the database thread is thattaway, but I appreciate you calling this out. Strangely, every time my table design gets called into question here, I end up redesigning it and the problem I originally posted about goes away for a simple and elegant solution that just feels "righter".
|
# ? Jul 6, 2022 05:20 |
Jabor posted:Alternatively, if you're just trying to store this information and aren't interested in querying based on the values in it, storing all the attributes in a single JSON field will be much easier to work with. Going to second this. JSON fields can be useful when you mostly just need to dump some data for storage. You can still query on them, it's just horrible and slow.
|
|
# ? Jul 6, 2022 05:54 |
|
MariaDB dynamic columns are cool for that too, and it's weirdly likely that you're running MariaDB and don't even know it. They have some special crud to run like a key-value store in a single column and can be easily spat out as JSON. It really is just JSON in a blob with native crud though and separate associative tables tend to be way better in all cases. A follow up: Not to be confused with JSON columns which are just a syntax check on insert/update. Ranzear fucked around with this message at 00:29 on Jul 7, 2022 |
# ? Jul 6, 2022 23:44 |
|
Running macOS 12.1, with XAMPP running, so that I can run PHP locally. Can I open a file with its default application with shell_exec? edit: This works in Terminal: open -a "myAppName" file.txt This does nothing in PHP: code:
Good Sphere fucked around with this message at 20:45 on Jul 7, 2022 |
# ? Jul 7, 2022 20:02 |
|
The shell_exec() function returns the output of the command, so you can see what is being output:php:<?php echo(shell_exec("open -a 'myAppName' file.txt")); Also, if you're not escaping the double quotes or using single quotes like in my example, you might have a plain parsing error.
|
# ? Jul 7, 2022 21:51 |
|
musclecoder posted:The shell_exec() method returns the output of the command, so you could see what is being output: Thanks for the info. I tried your suggestions. Used echo(shell_exec("open -a 'myAppName' file.txt"));, also with the full path, and even without specifying the app name. Maybe it's how things are configured on the PHP end. Since you're curious, here's some background on why I want to do this. I have JavaScript that calls a certain numbered function that renders graphics frame by frame. Those images get saved with an ajax call to PHP. I string the images into a video using ffmpeg via a command. First, it would be nice to be able to start making that video right away when the images are all rendered, with that ffmpeg command. That's not too much of a problem though. The real reason is that I want an interface where I can make new files, one for each image-generation function. These functions are kind of containers for little programs that make these animation frames. So if I want to edit one, I could just click it in a list and hit open. There's more to it than that too. They are just function names with arbitrary numbers. I could keep it that way, but in a SQL database I could have different things like titles for the functions (which I could change later without the program being affected), and descriptions. Good Sphere fucked around with this message at 01:56 on Jul 8, 2022 |
# ? Jul 8, 2022 01:38 |
|
Good Sphere posted:Thanks for the info. I tried your suggestions. Used echo(shell_exec("open -a 'myAppName' file.txt"));, also with full path, and even without specifying the app name. Maybe it's how things are configured on the php end. In php.ini, search for disable_functions and make sure shell_exec is not listed there. If it is, remove it, save, and restart XAMPP. ModeSix fucked around with this message at 01:21 on Jul 16, 2022 |
# ? Jul 16, 2022 01:17 |
|
you can also pare it back a bit and test if it works at all. for example, what do you get if you run var_dump(shell_exec("echo test")); ?
|
# ? Jul 16, 2022 01:36 |
|
I am working on a pet project to import an AWS Cost and Usage Report into a database, just because. The problem I'm facing is that month-to-month the columns in the report change as services are used or not used. There is a data dictionary that reveals about two hundred columns that might potentially show up in a report, but most of my reports have about 190-195. This produced report is not normalized and is very frustrating to use in its native form. (Yes, there are a gangload of ways to access this data through Cost Explorer, or Quicksight or Athena, but I'm doing it this way to expand my PHP chops.) What I've done so far is read the header of the AWS-produced CSV file and use that to populate the column names of the Postgres INSERT INTO(...) VALUES(...) command, and then loop over the subsequent lines in the CSV file to populate the rest of the $query string. My current issue is that some fields are numeric and some are text strings including escape-able characters (commas or single or double quotes, etc.), and I can't figure out how to escape the special characters as well as enclose text strings in quotes so the query doesn't barf. I've tried using implode but that wraps the whole string in double quotes, and am using code:
I've also tried pg_escape_string but am using PDO for my connection type and keep bumping into "pg_escape_string(): Argument #1 ($connection) must be of type PgSql\Connection, PDO given" I think that I'm on the right track with implode but I can't wrap my head around it. Agrikk fucked around with this message at 16:26 on Aug 1, 2022 |
# ? Aug 1, 2022 16:20 |
|
Are you able to use a newer version of Postgres? If so, you may find it easier to just convert the data into a big JSON blob and query that directly. Alternatively, this may be a time where an entity-attribute-value structure would work better than attempting to map the headers of the CSV file to column headers. Finally, could you have some deterministic function that sanitizes the names of the columns so they don't contain anything other than [a-z0-9\_]? That way something like resourceTags/user:purpose would be converted to resource_tags_user_purpose for example?
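That deterministic sanitizer could be as small as this; the camelCase-splitting step is my assumption about the desired mapping, so adjust to taste:

```php
<?php
// Map an AWS CUR header like "resourceTags/user:purpose" to a safe
// snake_case column name containing only [a-z0-9_].
function sanitize_column(string $raw): string
{
    $col = preg_replace('/([a-z0-9])([A-Z])/', '$1_$2', $raw); // split camelCase
    $col = strtolower($col);
    $col = preg_replace('/[^a-z0-9]+/', '_', $col);            // squash the rest
    return trim($col, '_');
}
```

Run every header through it once when you read the CSV; since it's deterministic, the same raw header always lands in the same column across months.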
|
# ? Aug 1, 2022 21:14 |
|
Agrikk posted:am using PDO for my connection type Might be a 'duh' question but you're using statement prep, not just query(), right? Agrikk posted:populate the rest of the $query string. I'm going to guess not. This is what statement prep is for, and PDO will handle all of the escaping as long as you have the right driver, by creating your handle like $dbh = new PDO('pgsql:host=...;dbname=...', $user, $pass); With statement prep, you can just make a query with question mark params and feed it a sparse array with column keys where the value is just the value; all formatting and escaping and a bunch of security stuff (not applicable here) is totally automatic. It shouldn't require any imploding of keys to columns or anything; there's a way to make it work from the raw array but it's been a while for me to remember exactly. PDO isn't the connection type, it's an entire abstraction layer for several database formats. You may also want to PDO::getAvailableDrivers() just to double check you have Postgres available. Proper edit: With sparse arrays you're gonna need the columns. You'll have to implode the keys into a list of columns to match the array being inserted after all; I think I just had a slick way of doing it at one point. Something like implode(', ', array_keys($rowmap)) directly in the statement, but then use parameter binding for the values. If the columns don't match the array's keys it'll complain. You could just switch the PDO driver to pgsql and use a PDO equivalent of that escaping function you found, but this is the more proper and simpler way unless I've missed something. Ranzear fucked around with this message at 00:31 on Aug 2, 2022 |
# ? Aug 1, 2022 23:53 |
|
musclecoder posted:Are you able to use a newer version of Postgres? If so, you may find it easier to just convert the data into a big JSON blob and query that directly. I'm doing this now. My first step is to extract the headers from the CSV and sanitize them so they match previously sanitized headings in the table. It's working great so that's not the problem. The problem is the data. Text strings aren't quoted in the CSV so any comma in a string is breaking the data layout with a complaint about extra data at end of line. Ranzear posted:Might be a 'duh' question but you're using statement prep, not just query(), right? I think I am. I'm pretty sure I am. I'm defining the connection string like this: code:
code:
php:<?
$query = "INSERT INTO awsbilling.test ( ";
for ($x = 0; $x <= $headerCount; $x++) {
    if ($x < $headerCount) {
        $query .= "$header[$x], ";
    } else {
        $query .= "$header[$x] ";
    }
}
$query .= ") VALUES( ";
for ($x = 0; $x <= $headerCount; $x++) {
    $query .= pg_escape_string($conn, $csv[$index][$x]);
    if ($x < $headerCount) {
        $query .= ",";
    }
}
$query .= "); ";
print_r($query);
?>
quote:Proper edit: With sparse arrays you're gonna need the columns. You'll have to implode the keys into a list of columns to match the array being inserted after all, I think I just had a slick way of doing it at one point. Something like implode(array_keys($rowmap), ", ")) directly in the statement but then use parameter binding for the values. if the columns don't match the array's keys it'll complain. And here's the rub. implode is what I am trying to do, but I can't figure out the syntax to get it right. Agrikk fucked around with this message at 16:27 on Aug 2, 2022 |
# ? Aug 2, 2022 15:16 |
|
Nope. You're doing just a raw text query and all the hard parts manually. Look at statement prep. The only string handling should be that matching column set thing I mentioned.
|
# ? Aug 2, 2022 18:00 |
|
This made a huge difference. I'm now creating an array of headers and then building the prepared statement from that. Then I'm using array_combine to map values to column headers in a $params array: php:<?
$query = "INSERT INTO awsbilling.costusagereport_temp ( ";
for ($x = 0; $x <= $headerCount; $x++) {
    if ($x < $headerCount) {
        $query .= "$header[$x], ";
    } else {
        $query .= "$header[$x] ";
    }
}
$query = $query . " ) VALUES (";
for ($x = 0; $x <= $headerCount; $x++) {
    if ($x < $headerCount) {
        $query .= ":$header[$x], ";
    } else {
        $query .= ":$header[$x] ";
    }
}
$query = $query . ")";
print_r($query);
echo "\n";
$result = $conn->prepare($query);
$params = array_combine($header, $csv[$index]);
$result->execute($params);
?>
Except now I'm faced with: code:
|
# ? Aug 2, 2022 22:43 |
|
|
|
Agrikk posted:And here's the rub. implode is what I am trying to do, but I can't figure out the syntax to get it right. You probably want something along the lines of: php:<?php
// Assuming $values has already been sanitized. But don't count on that.
$query = "INSERT INTO table_name (" . implode(',', $columns) . ") VALUES ('" . implode("', '", $values) . "')";
Better, though, is to let PDO prepare the statement and bind the values: php:<?php
$values = [
    'column_a' => 'some "strange" value',
    'column_b' => '10',
    'column_c' => 'another \'strange\' value'
];
$query = "INSERT INTO table_name (" . implode(',', array_keys($values)) . ") VALUES (" . implode(',', array_fill(0, count($values), '?')) . ")";

// Connect using PDO
$pdo = new PDO(/* DSN */);
$stmt = $pdo->prepare($query);
$stmt->execute(array_values($values));
Edit: Regarding the types error you're getting - I would recommend making all of the fields nullable text values - it looks like you're trying to insert a null or non-numeric value into a double or float type field. musclecoder fucked around with this message at 22:54 on Aug 2, 2022 |
# ? Aug 2, 2022 22:51 |