|
ultrafilter posted:The "I'm feeling lucky" button can only return results that actually exist. Generative models can produce new content which may or may not be factual. The "I'm feeling lucky" button often returns results containing content that is not factual.
|
# ? Mar 31, 2023 23:13 |
|
|
I'm going to use ChatGPT for any HackerRank interview challenge.
|
# ? Mar 31, 2023 23:35 |
|
These assertions are close to correct but also dangerously wrong. LLMs like GPT-4 are perfectly capable of stepping through and interpreting code. It's wrong to think of them as intelligent, even artificially intelligent, but the "stochastic bullshitting" framing is also an oversimplification.
|
# ? Apr 1, 2023 01:51 |
|
Vulture Culture posted:LLMs like GPT-4 are perfectly capable of stepping through and interpreting code. No. They can parrot a facsimile of stepping through code because people have stepped through code on the internet before and the LLM can mimic the form of that. It is not actually reasoning about what the underlying state of the program is and how each line changes that state.
|
# ? Apr 1, 2023 10:58 |
|
speaking of the “leaving sounds like a great decision” title, what’s the best and quickest way to find a lawyer to review a severance agreement?
|
# ? Apr 1, 2023 19:00 |
|
Bongo Bill posted:ChatGPT can paraphrase existing content, including solutions to problems which have already been solved often. That has actual utility as a programming aid, just as long as you don't misunderstand what it's actually doing and assume it can program. They need to fix the factual inaccuracies. I was seeing if it could generate cryptic crossword clues, and it does, but they're based on massive factual inaccuracies. If you call it out it says oh yes my bad and spouts another load of poo poo.
|
# ? Apr 1, 2023 20:01 |
|
I pasted a few React components to ChatGPT and told it to generate some React Testing Library tests for them. What it generated was 80% of the way there. I had to fix some busted imports, remove a few cases that weren't necessary, clean up some mocking, and add some cases, but it was still better than not having it. I hesitate to recommend it to my coworkers or others in this codebase, though, because a lot of them already don't know what they're doing and would just take its results and commit them as long as the build didn't fail. Lacking trust in my coworkers' judgment isn't ChatGPT's fault though
|
# ? Apr 1, 2023 21:14 |
|
Aramoro posted:They need to fix the factual inaccuracies. I was seeing if it could generate cryptic crossword clues, and it does, but they're based on massive factual inaccuracies. If you call it out it says oh yes my bad and spouts another load of poo poo. That's not possible, because the fundamental underlying mechanism by which it operates is just finding the most likely next token given the prompt. Plus its training corpus is chock full of factual inaccuracies. But fundamentally, either it spits out a 1:1 copy of text it was trained on or it makes some poo poo up. Making some poo poo up is how the entire thing works.
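To make the "most likely next token" point concrete, here's a toy bigram sketch. This has nothing to do with GPT's actual architecture (a neural network conditioned on long contexts); it only illustrates the idea of greedily emitting the most frequent continuation, which is why "making poo poo up" and "working as intended" are the same thing:

```ruby
# Toy bigram "language model": count which word follows which in a tiny
# corpus, then always emit the most frequent follower.
corpus = "the cat sat on the mat the cat ran".split

counts = Hash.new { |h, k| h[k] = Hash.new(0) }
corpus.each_cons(2) { |a, b| counts[a][b] += 1 }

def next_token(counts, word)
  followers = counts[word]
  return nil if followers.empty?
  followers.max_by { |_, n| n }.first # greedy: pick the most frequent follower
end

next_token(counts, "the") # => "cat" ("cat" follows "the" twice, "mat" once)
```

The model has no notion of whether "the cat ran" is true; it only knows what tended to come next in its training data.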
|
# ? Apr 1, 2023 21:23 |
|
March 8: Nybble posted:^^ Levels.FYI is mandatory viewing every couple of months, even after being at a company for a few years. April 1: Nybble posted:speaking of the “leaving sounds like a great decision” title, what’s the best and quickest way to find a lawyer to review a severance agreement? That was pretty fast.
|
# ? Apr 2, 2023 03:44 |
|
lol, lmao. I don’t know if I would have done anything differently at offer time, but all the red flags that popped up soon after turned out to be right. At least the 4 month severance and signing bonus I don’t have to pay back (1.5 year earn out) means I’m gonna be fine. Practically a year’s salary in 6 months, with the downsides of having to interview again. Honestly if I could make insurance happen for my family I’d probably sit out of the market for a bit longer and wait out all this chaos.
|
# ? Apr 2, 2023 04:32 |
|
|
Yesterday I used ChatGPT to write a simple class that would take a query like "my search query from:2015-01-01 userId:abc" and turn it into an object. It gave me exactly what I wanted and wrote unit tests for me. For simple use cases like that it's pretty great, it saved me probably 20-40 minutes.
|
# ? Apr 2, 2023 06:54 |
|
prom candy posted:Yesterday I used ChatGPT to write a simple class that would take a query like "my search query from:2015-01-01 userId:abc" and turn it into an object. It gave me exactly what I wanted and wrote unit tests for me. For simple use cases like that it's pretty great, it saved me probably 20-40 minutes. If you don’t mind, what was the text you used to generate what you wanted? How specific did you have to be? At what point does writing exactly what you want in paragraph form take longer than just coding it?
|
# ? Apr 2, 2023 15:06 |
|
epswing posted:If you don’t mind, what was the text you used to generate what you wanted? How specific did you have to be? At what point does writing exactly what you want in paragraph form take longer than just coding it? First message: quote:Write a simple ruby class for parsing search terms like "my query from:2020-01-05 userId:abc123" into hashes. It should be able to take a list of the allowed keys and not transform any keys that don't match. Second message: quote:Can you make it handle a query like "some query from:2020-01-05 userId:abc123 some further query" where the result is {:from=>"2020-01-05", :userId=>"abc123", :"query"=>"some query some further query"} Third message: quote:Can you write some RSpec specs for this? I don't think there's any way I could have coded it out and written unit tests faster than this.
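For reference, a minimal hand-written sketch of what a class like that might look like. The class name, method name, and exact return shape here are guesses reconstructed from the prompts; the thread doesn't show ChatGPT's actual output:

```ruby
# Sketch of a search-term parser: pulls "key:value" pairs for allowed
# keys out of a query string and collects everything else as free text.
class SearchParser
  def initialize(allowed_keys)
    @allowed_keys = allowed_keys.map(&:to_s)
  end

  # "some query from:2020-01-05 userId:abc123 some further query"
  # => { :from => "2020-01-05", :userId => "abc123", :query => "some query some further query" }
  def parse(input)
    result = {}
    free_text = []
    input.split(/\s+/).each do |token|
      key, value = token.split(":", 2)
      if value && @allowed_keys.include?(key)
        result[key.to_sym] = value
      else
        free_text << token # unrecognized keys stay in the query untransformed
      end
    end
    result[:query] = free_text.join(" ")
    result
  end
end
```

Simple enough to verify by eye, which is exactly the kind of task where reviewing generated code beats typing it out.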
|
# ? Apr 2, 2023 15:20 |
|
That's similar to the approach I usually take. I'll ask clarifying questions, and paste error messages I encounter and ask how it would change its approach given whatever runtime or build-time error comes up.
|
# ? Apr 2, 2023 16:34 |
|
I use it for a home project of gamedev and this was a recent exchange (my side only). quote:Write a Class for Unity that displays a toast on the screen for a few seconds then fade out. It should have a queue of messages and only display one at a time. If a user taps it, it should dismiss it before the fade out time. quote:In the ToastManager class, I didn't mean a tap on the screen, I meant that the user had to click on the toast itself to dismiss it. Update FadeOutToast for that. quote:I'm using the new input system, update for that instead quote:I don't like that, modify it to use IPointerClickHandler instead quote:Update FadeOutToast to use DoTween instead of Lerp At which point I pasted the class as written into my code base. You definitely had to know what you wanted, and could have gotten there on your own, but: A) It wrote working code. B) Got there in less time, even just from a typing perspective, than I would have otherwise. The 'nice' part to me was that 2 of my clarifying questions were around "Use this API, not that API"
|
# ? Apr 3, 2023 04:21 |
|
Not directing this at anyone in particular but to me it's interesting that people are hyped about turning some of their tasks from "develop" to "code review". Personally I like programming way more, and approach code reviews like I approach taking medicine and going to the gym: a necessary evil. Maybe the fact that it's a soulless computer receiving the "reviews" makes the process less mentally taxing.
|
# ? Apr 3, 2023 17:08 |
|
Large Language Models and Simple, Stupid Bugs quote:With the advent of powerful neural language models, AI-based systems to assist developers in coding tasks are becoming widely available; Copilot is one such system. Copilot uses Codex, a large language model (LLM), to complete code conditioned on a preceding "prompt". Codex, however, is trained on public GitHub repositories, viz., on code that may include bugs and vulnerabilities. Previous studies [1], [2] show Codex reproduces vulnerabilities seen in training. In this study, we examine how prone Codex is to generate an interesting bug category, single statement bugs, commonly referred to as simple, stupid bugs or SStuBs in the MSR community. We find that Codex and similar LLMs do help avoid some SStuBs, but do produce known, verbatim SStuBs as much as 2x as likely than known, verbatim correct code. We explore the consequences of the Codex generated SStuBs and propose avoidance strategies that suggest the possibility of reducing the production of known, verbatim SStuBs, and increase the possibility of producing known, verbatim fixes.
|
# ? Apr 3, 2023 18:24 |
|
Itaipava posted:Not directing this at anyone in particular but to me it's interesting that people are hyped about turning some of their tasks from "develop" to "code review". Personally I like programming way more, and approach code reviews like I approach taking medicine and going to the gym: a necessary evil. The part of development that's most interesting to me is figuring out how I'm going to solve the problem or make a bunch of pieces fit together. So far LLMs haven't taken that away from me. Like the fun part of developing that SearchParser class was figuring out various inputs and what kind of object they should output. Writing the actual string manipulation code to make it happen isn't all that exciting, especially for something pretty simple.
|
# ? Apr 3, 2023 19:25 |
|
Itaipava posted:Not directing this at anyone in particular but to me it's interesting that people are hyped about turning some of their tasks from "develop" to "code review". Personally I like programming way more, and approach code reviews like I approach taking medicine and going to the gym: a necessary evil. Yeah I don't understand why people seem to think that debugging was the most interesting part of their jobs
|
# ? Apr 3, 2023 22:06 |
|
HOWEVER, if you use AI to get the task done in twenty minutes and then touch fish for the rest of the day you have my support & blessing
|
# ? Apr 3, 2023 22:16 |
|
Cup Runneth Over posted:Yeah I don't understand why people seem to think that debugging was the most interesting part of their jobs It was certainly the most interesting part in a "may you live in interesting times" way, at least.
|
# ? Apr 3, 2023 22:37 |
|
Debugging is like a murder mystery where you are the detective, the killer, and the victim.
|
# ? Apr 3, 2023 23:32 |
|
Itaipava posted:Not directing this at anyone in particular but to me it's interesting that people are hyped about turning some of their tasks from "develop" to "code review". Personally I like programming way more, and approach code reviews like I approach taking medicine and going to the gym: a necessary evil. i get to offload a bunch of tedium and produce higher quality code. i don't have to talk to my annoying coworkers with it either. win-win. the time i'd spent futzing with yet another unit or integration test i can instead spend picking my nose in a booked conference room while i scroll around on my phone
|
# ? Apr 3, 2023 23:41 |
|
ChatGPT has been great for handling dumb single-purpose language stuff like regex or GitHub Actions YAML, and I look forward to using it for AWS permissions. I plan to use it to help the can't-really-program folks at my company get up to speed more quickly.
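Purely for illustration, the kind of single-purpose GitHub Actions YAML being described might look something like this (the workflow, job, and step layout here are invented for the example; only `actions/checkout` and `actions/setup-node` are real published actions):

```yaml
# Hypothetical minimal CI workflow -- the sort of boilerplate that's
# tedious to remember but trivial to review once generated.
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci && npm test
```

This is a sweet spot for generation: the syntax is fiddly and rarely memorized, but the result is short enough to verify line by line.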
|
# ? Apr 4, 2023 01:14 |
|
Bongo Bill posted:Debugging is like a murder mystery where you are the detective, the killer, and the victim. Sometimes you discover the bug is from the framework you use or the OS, which makes it a bit more Sherlock Holmes since you're left with a very improbable truth.
|
# ? Apr 4, 2023 03:16 |
|
Am I misunderstanding how people are asking ChatGPT to write tests? If you give it the code and ask it to write tests then you aren't actually testing your code as it will match any mistakes with incorrect tests. I guess in theory you can review the test code, but at least personally I'm way less likely to catch mistakes in a review as opposed to writing test cases where I know what should happen and have to write that out.
|
# ? Apr 4, 2023 04:14 |
|
asur posted:Am I misunderstanding how people are asking ChatGPT to write tests? If you give it the code and ask it to write tests then you aren't actually testing your code as it will match any mistakes with incorrect tests. I guess in theory you can review the test code, but at least personally I'm way less likely to catch mistakes in a review as opposed to writing test cases where I know what should happen and have to write that out. sometimes i'll paste single functions to it; other times i'll enumerate the cases that need to be tested, paste them along with the code to chatgpt, and basically review the results and make tweaks. i'm still writing some myself, but it's the trickier bits or anything that's super domain-specific or sensitive. you'd be a fool to trust it completely but it's a much faster way to stand some things up biceps crimes fucked around with this message at 04:34 on Apr 4, 2023 |
# ? Apr 4, 2023 04:31 |
|
asur posted:Am I misunderstanding how people are asking ChatGPT to write tests? If you give it the code and ask it to write tests then you aren't actually testing your code as it will match any mistakes with incorrect tests. I guess in theory you can review the test code, but at least personally I'm way less likely to catch mistakes in a review as opposed to writing test cases where I know what should happen and have to write that out. I read the tests that it spits out and even intentionally break a couple to make sure (the same way I intentionally break my own tests) It really just saves a lot of typing time.
|
# ? Apr 4, 2023 05:37 |
|
I understand now. ChatGPT is for people who don't naturally type at 140wpm.
|
# ? Apr 4, 2023 06:06 |
|
We had one of those meetings the other day where teams can demo their new tech stuff. One of the teams was like: "So we tried using ChatGPT for blah blah blah" At the end: "Any questions?" "The company published an internal statement recently saying we should not put user info or internal information such as code into ChatGPT, since we don't know what exactly they might do with that. How did you deal with that?" *crickets chirping*
|
# ? Apr 4, 2023 07:52 |
|
Yeah, I've been really clear with my team not to touch any AI stuff at work until management clears it. All these systems currently say they might read inputs; it's a pretty huge company IP violation that a lot of devs are just ignoring.
Mega Comrade fucked around with this message at 10:37 on Apr 4, 2023 |
# ? Apr 4, 2023 10:33 |
|
I’m in a special group at work right now that’s approved for usage. But I don’t know what policy they’re cooking up for the whole company, I have no idea how they foolproof it. Then again, I constantly see people breaking current policies with their unapproved browser extensions, unapproved password managers, productivity apps. Hell, I’ve seen some people dump entire files of code into suspect webpages that format code. I’m glad I’m not in security, it’d be very depressing
biceps crimes fucked around with this message at 13:54 on Apr 4, 2023 |
# ? Apr 4, 2023 13:48 |
|
My company is working on one too. I suspect they will go with "only use these ones and in these ways," which will be established company offerings with actual telemetry options, i.e. GitHub Copilot (Business) etc. Although Copilot still has that whole licensing lawsuit hanging over it, so maybe not even that.
|
# ? Apr 4, 2023 14:01 |
|
The killer feature is if you can train it on only your own company's code, allowing you to automate the really stupid incantations that are nonetheless idiomatic in your codebase, while also ducking copyright concerns about other people's code being used as a source.
|
# ? Apr 4, 2023 14:06 |
|
Would that be useful though? A Copilot that can only suggest code that already exists in your products? Most of the code across our products sucks. And it wouldn't work for new features that come out in the language you use. I think new licenses that allow or disallow training are needed. But I don't know how you solve the problem with the current models. Mega Comrade fucked around with this message at 14:24 on Apr 4, 2023 |
# ? Apr 4, 2023 14:19 |
|
Code substantially similar to code that already exists elsewhere is the only thing an LLM can actually write, so explicitly limiting it to that is not as big a constraint as it initially sounds like.
|
# ? Apr 4, 2023 14:58 |
|
Jabor posted:Code substantially similar to code that already exists elsewhere is the only thing an LLM can actually write, so explicitly limiting it to that is not as big a constraint as it initially sounds like. I wonder, though, how much training code is needed. Do you need like 1M+ LOC, or is a small 50k LOC project enough to be useful?
|
# ? Apr 4, 2023 15:15 |
|
waiting for the tipping point as developers shift to ChatGPT where there isn't enough StackOverflow data to pull from on any new releases and people have to start reading documentation
|
# ? Apr 4, 2023 16:21 |
|
|
Fellatio del Toro posted:waiting for the tipping point as developers shift to ChatGPT where there isn't enough StackOverflow data to pull from on any new releases and people have to start reading documentation There are already versions of ChatGPT-based bots that basically just ingest entire git repos and let you interrogate them.
|
# ? Apr 4, 2023 17:34 |