This is the way. No human is capable of always being the best version of themselves.
It’s terribly sad when such great minds are taken from us relatively early by horrible diseases.
He seems like someone who always made the best of any situation. After a stint in prison, I’m sure he just wanted to keep doing his best, which led him towards things that weren’t as controversial. Either way, we all have plenty to learn from his works.
This is the way. Everything ChatGPT produces for me gets tested and debugged here.
I’ve found that you need to be very careful when asking it to directly modify regex it produced in an earlier response. Once I get to the 3rd or 4th iteration of asking it to revise previous output, the likelihood that it starts hallucinating increases dramatically. The best solution I’ve found is to put your entire request in a single prompt that walks it through all of the requirements step by step, rather than layering changes across several follow-ups.
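For example (the pattern and test cases below are hypothetical stand-ins, not anything ChatGPT actually gave me), pinning the requirements down as a tiny test script makes it obvious when a “revised” regex quietly breaks an earlier requirement:

```python
import re

# Hypothetical pattern from ChatGPT: match a US-style phone number,
# optionally with parentheses around the area code.
pattern = re.compile(r"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$")

# Each requirement from the original prompt becomes a test case, so a
# revised regex that silently breaks something gets caught immediately.
should_match = ["555-123-4567", "(555) 123-4567", "555.123.4567"]
should_not_match = ["555-12-4567", "phone: 555-123-4567", "5551234567x9"]

for s in should_match:
    assert pattern.match(s), f"expected match: {s!r}"
for s in should_not_match:
    assert not pattern.match(s), f"expected no match: {s!r}"

print("all regex checks passed")
```

Re-running that after every iteration takes seconds and catches the hallucinated regressions before they ship.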
This is where I go to validate ChatGPT’s work. The debugging capabilities on that site are wonderful.
It’s not that I’m incapable of evaluating regex; it’s that mentally working through a complex regex statement to determine its purpose can be time-consuming. Why take 20 minutes to understand some regex when ChatGPT can do it in 20 seconds?
I often need to deal with half a dozen different programming languages in a given day or week, and the context switching can be difficult at times. When you’ve spent all day switching between JavaScript, Python, and YAML and suddenly need to draft some regex, tools like ChatGPT can help immensely in reducing the mental burden of switching gears.
I typically try GPT-3.5 first and switch to GPT-4 if the results aren’t great. 3.5 handles basic use cases quite well, for example, writing regex that detects Jira ticket naming conventions. For more complex things, I go to 4.
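For the Jira case, the sort of thing 3.5 reliably produces looks roughly like this. I’m assuming the standard uppercase project keys like ABC-123 here, so adjust if your projects allow different key formats:

```python
import re

# Standard Jira issue keys look like "PROJ-123": an uppercase project
# key (a letter followed by letters/digits) plus a numeric issue id.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

text = "Fixed in ABC-42 and DEVOPS-1337; see notes in abc-1 (not a key)."
print(JIRA_KEY.findall(text))  # ['ABC-42', 'DEVOPS-1337']
```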
It sometimes gets things wrong, but I’ve also found that in more complex situations, just saying “that didn’t work” gets it to reevaluate its answer.
I agree with this, and I think it may be fairly easy to catch most of this with some simple conventions:
This would create some gray area for things like the last link in your list, but it should catch the majority of problems. I wonder if this could be resolved with a (new?) Q&A community for people to ask questions that are specific to their situation. That would enable c/programming to focus more on conversational topics.
Y’all need to get yourselves some PR review automation in place. Stop wasting time on trivial reviews and requesting changes for common problems, so that when you ping a colleague for a code review, they know it’s important rather than just a request for a thumbs up.
It might be because I’ve been using GitHub more frequently in recent months, but I have definitely noticed more disruptions than normal. Our engineering team seems to mention issues almost weekly now, when they used to be fairly rare.
A lot of organizations seem to focus on lagging indicators such as lines of code written or the number of bugs found. Those numbers don’t paint an accurate picture of how things are today, and I think that’s part of what fuels the perception that being an engineering leader is one of the most difficult roles in modern companies.
The first thing is to get data that tracks key performance metrics. Many organizations start with DORA metrics to create “slides for the board” that show the overall health of the engineering organization. This is a great place to start, but you can take it further by incorporating your project tracking into the data to measure how you allocate resources across the engineering function and whether that allocation is enough to meet product delivery timelines. There are a handful of tools out there that make this easy, like Sleuth and LinearB, and a quick search should surface other solutions too.
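To make that concrete, two of the four DORA metrics fall straight out of data you likely already have. Here’s a minimal sketch, assuming you can export each deploy’s timestamp and the earliest commit that shipped in it; the field names are made up, and tools like Sleuth and LinearB derive this from your VCS and deploy pipeline automatically:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical export: each deploy with its timestamp and the earliest
# commit timestamp that shipped in it.
deploys = [
    {"deployed_at": datetime(2023, 6, 1, 14, 0), "first_commit_at": datetime(2023, 5, 30, 9, 0)},
    {"deployed_at": datetime(2023, 6, 2, 11, 0), "first_commit_at": datetime(2023, 6, 1, 16, 0)},
    {"deployed_at": datetime(2023, 6, 5, 10, 0), "first_commit_at": datetime(2023, 6, 2, 13, 0)},
]

# Deployment frequency: deploys per week over the observed window.
window = max(d["deployed_at"] for d in deploys) - min(d["deployed_at"] for d in deploys)
per_week = len(deploys) / max(window / timedelta(weeks=1), 1e-9)

# Lead time for changes: median commit-to-deploy latency.
lead_time = median(d["deployed_at"] - d["first_commit_at"] for d in deploys)

print(f"deployment frequency: {per_week:.1f}/week")
print(f"median lead time: {lead_time}")
```

Once you have numbers like these per team, layering in project-tracking data is what lets you see whether resource allocation actually lines up with delivery timelines.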
Oh wow, thanks. I didn’t realize that making this an image post got rid of the link