I mean at this point just commit to the fraud and pay someone who actually knows how to code to take your exam for you.
I remember so little from my studies I do tend to wonder if it would really have been cheating to… er… cheat. Higher education was like this horrendous ordeal where I had to perform insane memorisation tasks between binge drinking, and all so I could get my foot in the door as a dev and then start learning real skills on the job (e.g. “agile” didn’t even exist yet then, only XP. Build servers and source control were in their infancy. Unit tests were the distant dreams of a madman.)
The bullshit is that anon wouldn’t be fsked at all.
If anon actually used ChatGPT to generate some code, memorize it, understand it well enough to explain it to a professor, and get a 90%, congratulations, that’s called “studying”.
Professors hate this one weird trick called “studying”
Yeah, if you memorized the code and its functionality well enough to explain it in a way that successfully bullshits someone who can sight-read it… you know how that code works. You might need a linter, but you know how that code works and can probably at least fumble your way through a shitty v0.5 of it
virtual machine
Yeah fake. No way you can get 90%+ using chatGPT without understanding code. LLMs barf out so much nonsense when it comes to code. You have to correct it frequently to make it spit out working code.
If we’re talking about freshman CS 101, where every assignment is the same year-over-year and it’s all machine graded, yes, 90% is definitely possible because an LLM can essentially act as a database of all problems and all solutions. A grad student TA can probably see through his “explanations”, but they’re probably tired from their endless stack of work, so why bother?
If we’re talking about a 400 level CS class, this kid’s screwed and even someone who’s mastered the fundamentals will struggle through advanced algorithms and reconciling math ideas with hands-on-keyboard software.
Are you guys just generating insanely difficult code? I feel like 90% of all my code generation with o1 works first time? And if it doesn’t, I just let GPT know and it fixes it right then and there?
The problem is more complex than initially thought, for a few reasons.
One, the user is not very good at prompting, and will often fight with the prompt to get what they want.
Two, often times the user has a very specific vision in mind, which the AI obviously doesn’t know, so the user ends up fighting that.
Three, the AI is not omniscient, and just fucks shit up, makes goofy mistakes sometimes. Version assumptions, code compat errors, just weird implementations of shit, the kind of stuff you would expect AI to do that’s going to make it harder to manage code after the fact.
Unless you’re using AI strictly to write isolated scripts in one particular language, AI is going to fight you at least some of the time.
I asked an LLM to generate tests for a 10 line function with two arguments, no if branches, and only one library function call. It’s just a for loop and some math. Somehow it invented arguments, and the ones that actually ran didn’t even pass. It made like 5 test functions, spat out paragraphs explaining nonsense, and it still didn’t work.
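For a sense of scale, the function was roughly this shape (a hypothetical stand-in, not the actual code), and the tests I wanted were nothing fancier than plain asserts:

```python
import math

def scaled_roots(values, factor):
    """Multiply each value by `factor` and return the square roots."""
    results = []
    for v in values:
        results.append(math.sqrt(v * factor))
    return results

# pytest-style tests of the sort I was expecting it to produce
def test_scaled_roots():
    # 4*4 = 16 -> 4.0, 9*4 = 36 -> 6.0
    assert scaled_roots([4, 9], 4) == [4.0, 6.0]

def test_scaled_roots_empty():
    assert scaled_roots([], 10) == []
```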
This was one of the smaller deepseek models, so perhaps a fancier model would do better.
I’m still messing with it, so maybe I’ll find some tasks it’s good at.
From what I understand, the “preview” models are quite handicapped; usually the benchmark is the full-fat model for that reason. The recent OpenAI one (they have stupid names, idk what is what anymore) had a similar problem.
If it’s not a preview model, it’s possible a bigger model would help, but usually prompt engineering is going to be more useful. AI is really quick to get confused sometimes.
It might be, idk, my coworker set it up. It’s definitely a distilled model though. I did hope it would do a better job on such a small input.
My first attempt at coding with chatGPT was asking about saving information to a file with python. I wanted to know what libraries were available and the syntax to use them.
It gave me a three page write up about how to write a library myself, in python. Only it had an error on damn near every line, so I still had to go Google the actual libraries and their syntax and slog through documentation.
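For reference, what I was actually after fits in a few lines with the standard library (a minimal sketch; json and pickle are built in):

```python
import json
import pickle

data = {"name": "anon", "score": 90}

# Plain text: the built-in open() is all you need
with open("notes.txt", "w") as f:
    f.write("saving information to a file\n")

# Structured data as human-readable JSON
with open("data.json", "w") as f:
    json.dump(data, f, indent=2)

# Arbitrary Python objects as a binary blob
with open("data.pkl", "wb") as f:
    pickle.dump(data, f)

# Reading it back
with open("data.json") as f:
    restored = json.load(f)
```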
Cannot confirm. LLMs generate garbage for me, so I never use them.
I just generated an entire angular component (table with filters, data services, using in house software patterns and components, based off of existing work) using copilot for work yesterday. It didn’t work at first, but I’m a good enough software engineer that I iterated on the issues, discarding bad edits and referencing specific examples from the extant codebase, and got copilot to fix it. 3-4 days of work (if you were already familiar with the existing way of doing things) done in about 3-4 hours.

But if you didn’t know what was going on and how to fix it, you’d end up with an unmaintainable, non-functional mess, full of bugs we have specific fixes in place to avoid but copilot doesn’t care about, because it doesn’t have an idea of how software actually works, just what it should look like.

So for anything novel or complex you have to feed it an example, then verify it didn’t skip steps or forget to include something it didn’t understand/predict, or make up a library/function call. You have to know enough about the software you’re making to point that stuff out, because just feeding whatever error pops out of your compiler may get you to working code, but it won’t ensure quality code, maintainability, or intelligibility.
A lot of people assume their not knowing how to prompt is a failure of the AI. Or they tried it years ago, and assume it’s still as bad as it was.
Garbage for me too except for basic beginners questions
deepseek rnows solid, autoapprove works sometimes lol
I guess the new new GPT actually makes code that works on the first try
You mean o3 mini? Wasn’t it on the level of o1, just much faster and cheaper? I noticed no increase in code quality, perhaps even a decrease. For example it does not remember things far more often, like variables that have a different name. It also easily ignores a bunch of my very specific and enumerated requests.
o3 something… I think the bigger version…
but, I saw a video where it wrote a working game of snake, and then wrote an AI training algorithm to make an AI that could play snake… all of the code ran on the first try…
could be a lie though, I dunno…
Asking it to write a program that already exists in its entirety with source code publicly posted, and having that work, is not impressive.
That’s just copy pasting
He asked it by describing the rules of the game, and then asked it to write an AI to learn the game…
It’s still basic, but not copy-pasta
o3 yes perhaps, we will see then. Would be amazing.
Isn’t it kinda dumb to have coding exams that aren’t open book? If you don’t understand the material, on a well-designed test you’ll run out of time even with access to the entire internet.
When in the hell would you ever be coding IRL without access to language documentation and the internet? Isn’t the point of a class to prepare you for actual coding you’ll be doing in the future?
Disclaimer: I did not major in CS, but I did have a lot of open book tests—I failed when I should have failed because I didn’t study enough, and passed when I should have passed, because familiarity with the material is what lets you find your references fast enough to complete the test.
Most of my CS exams in more advanced classes were take home. Well before the internet though. They were some of the best finals I ever took.
Assignments involved actual coding but exams were generally pen and paper when I got my degree. If a question involved coding, they were just looking for a general understanding and didn’t nitpick syntax. The “language” used was more of a C++-like pseudocode than any real specific language.
ChatGPT could probably do well on such exams because making up functions is fair game, as long as it doesn’t trivialize the question and demonstrates an overall understanding.
I mean, I don’t know how to code but I imagine it’s the same as with other subjects. like not being able to use a calculator during some math tests. The point of the examination is for you to demonstrate you know and understand the concepts. It’s not for you to be tested in the same way you would be in the real world.
Yes, it is laziness on the teacher’s part.
I know people who used to work in programming with zero internet connection… this was ~10 years ago… never underestimate the idiocy of companies. P.S. it wasn’t even a high-security job, the owners were just paranoid boomers.
With that said, with a decent IDE with autocomplete, you can get by a lot of the time without documentation. It’s usually the niche stuff that you need to look up how to do.
You’d have a wall full of documentation before internet was a common source of data.
pay for school
do anything to avoid actually learning
Why tho?
Job
Losing the job after a month of demonstrating you don’t know what you claimed to is not a great return on that investment…
It is, because you now have the title on your resume and can just lie about getting fired. You just need one company to not call a previous employer or do a half hearted background check. Someone will eventually fail and hire you by accident, so this strategy can be repeated ad infinitum.
No actual professional company or job of value is going to skip checking your CV or your work history… So like sure, you may get that job at Quality Inn as a night manager making $12 an hour because they didn’t fucking bother to check your resume…
But you’re not getting some CS job making $120,000 a year because they didn’t check your previous employer. Lol
Sorry, you’re not making it past the interview stage in CS with that level of knowledge. Even on the off chance that name on the resume helps, you’re still getting fired again. You’re never going to build up enough to last through the job search to the next grift.
I am sorry that you believe that all corporations have these magical systems in place to infallibly hire skilled candidates. Unfortunately, the idealism of academia does not always transfer to the reality of industry.
…you stopped reading halfway through my comment didn’t you?
Idiot.
This person is LARPing as a CS major on 4chan
It’s not possible to write functional code without understanding it, even with ChatGPT’s help.
where’s my typo
;
Giving me flashbacks to a college instructor that marked my entire functioning code block, written on paper, as wrong because I did not clearly make a ; on one line of about 100 lines. I argued that a compiler would mark that in the real world, but he countered with "It still won’t run without that ; " That made me rethink my career path in CS. Fuck that guy.
That was/is one of my biggest complaints about CS courses: the horrendous, uncontrolled, inconsistent-across-course/instructor/TA mixture of concept and implementation skills expected of the students.
Ultimately you need to develop both to be successful in a CS/Software Dev/Programming career, but I’ve watched so fucking many people fail to progress in courses and learning because they’re trying to learn both the concept and how it needs to be formatted in the class specific language’s syntax at the same time. They hit a roadblock in one and the whole thing comes tumbling down because if your code doesn’t work you can’t just work around it to get the other parts done and then come back later. Being able to stub something out to do that requires skills that they’re taking the class to learn in the first place!
Minor mistakes with syntax creates a situation where they can’t get a working example of the concept to play around with. So then they don’t have something hands on to use to cement their conceptual understanding.
Minor mistakes with the conceptual understanding lead to a complete inability to understand why the syntax works (if it even does) to create an example of the concept, leaving them high and dry when the class asks them to think outside the box and make something new or modified based off what came before.
I’ve worked as a Lab Assistant (TA who doesn’t grade) for intro to programming courses. Due to transfer credit shenanigans, combined with a “soon to retire” professor getting saddled with the bureaucratic duties for their whole department, I ended up running lectures for an intermediate course I effectively had to take twice. I regularly led study sessions in college for my friends in programming classes. Even now, I’m the most experienced programmer on my team of sysadmins/engineers at work and regularly assist co-workers with scripts when I’m not coding custom automations and system integrations.
So I have experience teaching and using this shit.
In my opinion, courses should be split into two repeatedly alternating parts: concept and implementation/syntax. They are separate skill sets.
You need a certain set of skills to be able to communicate. You need a different set of skills to do so in a specific language.
Plus, classwork needs to better mimic real world situations. Even crazy motherfuckers using sed or nano to code should be using linters in this day and age, and no one should be working in an environment where they only have one chance to get it 100% right with no means of testing.
You would think eventually some of it would sink in. I mean I use LLMs to write code all the time but it’s very rarely 100% correct, often with syntax errors or logic problems. Having to fix that stuff is an excellent way to at least learn the syntax.
U underestimate the power of the darkside, how powerful ctrl+c ctrl+v is young padawan
If you copy and paste from ChatGPT your code won’t compile.
You need to know what the pieces of code do and how to piece them together to make it work.
Which is kind of impossible to do without understanding it
Since version 4 it has no problem generating working code. The question is how complex the code can get etc. But currently with o1 (o3 mini perhaps a bit less) a dozen functions with 1000 lines of code are really possible without a flaw.
If I tell ChatGPT “write me a program in python that does X, Y, and Z” it will not output code that can be compiled or ran without editing
deserved to fail
Probably promoted to middle management instead
He might be overqualified
https://nmn.gl/blog/ai-illiterate-programmers
Relevant quote
Every time we let AI solve a problem we could’ve solved ourselves, we’re trading long-term understanding for short-term productivity. We’re optimizing for today’s commit at the cost of tomorrow’s ability.
And also possibly checking in code with subtle logic flaws that won’t be discovered until it’s too late.
I like the sentiment of the article; however this quote really rubs me the wrong way:
I’m not suggesting we abandon AI tools—that ship has sailed.
Why would that ship have sailed? No one is forcing you to use an LLM. If, as the article supposes, using an LLM is detrimental, and it’s possible to start having days where you don’t use an LLM, then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?
I personally don’t interact with any LLMs, neither at work nor at home, and I don’t have any issue getting work done. Yeah there was a decently long ramp-up period — maybe about 6 months — when I started on my current project at work where it was more learning than doing; but now I feel like I know the codebase well enough to approach any problem I come up against. I’ve even debugged USB driver stuff, and, while it took a lot of research and reading USB specs, I was able to figure it out without any input from an LLM.
Maybe it’s just because I’ve never bought into the hype; I just don’t see how people have such a high respect for LLMs. I’m of the opinion that using an LLM has potential only as a truly last resort — and even then will likely not be useful.
Why would that ship have sailed?
Because the tools are here and not going away
then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?
The actually useful shit LLMs can do. Their point is that relying mostly on an LLM hurts you; that doesn’t make it an invalid tool in moderation
You seem to think of an LLM only as something you can ask questions to; this is one of their worst capabilities and far from the only thing they do
Because the tools are here and not going away
Swiss army knives have had awls for ages. I’ve never used one. The fact that the tool exists doesn’t mean that anybody has to use it.
The actually useful shit LLMs can do
Which is?
The actually useful shit LLMs can do
Which is?
Waste energy and pollute the environment? I can relate… not useful, tho
Because the tools are here and not going away
I agree with this on a global scale; I was thinking about on a personal scale. In the context of the entire world, I do think the tools will be around for a long time before they ever fall out of use.
The actually useful shit LLMs can do.
I’ll be the first to admit I don’t know many use cases of LLMs. I don’t use them, so I haven’t explored what they can do. As my experience is simply my own, I’m certain there are uses of LLMs that I hadn’t considered. I’m personally of the opinion that I won’t gain anything out of LLMs that I can’t get elsewhere; however, if a tool helps you more than any other method, then that tool could absolutely be useful.
My 2 cents on this.
I never used LLMs until recently; not for moral or ideological reasons but because I had never felt much need to, and I also remember when ChatGPT originally came out it asked for my phone number, and that’s a hard no from me.
But a few months ago I decided to give it another go (no phone number now), and found it quite useful sometimes. However, before I explain how I use it and why I find it useful, I have to point out that this is only the case because of how crap search engines are nowadays, with pages and pages of trash results and articles.
Basically, I use it as a rudimentary search engine to help me solve technical problems sometimes, or to clear something up that I’m having a hard time finding good results for. In this way, it’s also useful to get a rudimentary understanding of something, especially when you don’t even know what terms to use to begin searching for something in the first place. However, this has the obvious limitation that you can’t get info for things that are more recent than the training data.
Another thing I can think of, is that it might be quite useful if you want to learn and practice another language, since language is what it does best, and it can work as a sort of pen pal, fixing your mistakes if you ask it to.
In addition to all that, I’ve seen people make what are essentially text based adventure games that allow much more freedom than traditional ones, since you don’t have to plan everything yourself - you can just give it a setting and a set of rules to follow, and it will mould the story as the player progresses. Basically DnD.
Basically, I use it as a rudimentary search engine
The other day I had a very obscure query where the web page results were very few and completely useless. Reluctantly I looked at the Google LLM-generated “AI Overview” or whatever it’s called. What it came up with was completely plausible, but utter bullshit. After a quick look I could see that it had taken text that answered a similar question, and just weaved some words I was looking for into the answer in a plausible way. Utterly useless, and just ended up wasting my time checking that it was useless.
Another thing I can think of, is that it might be quite useful if you want to learn and practice another language
No, it’s terrible at that. Google’s translation tool uses an LLM-based design. It’s terrible because it doesn’t understand the context of a word or phrase.
For instance, a guy might say to his mate: “Hey, you mad cunt!”. Plug that into an LLM translation and you don’t know what it might come up with. In some languages it actually translates to something that will translate back to “Hey, you mad cunt”. In Spanish it goes for “Oye, maldita sea”, which is basically “Hey, dammit”, which is not the sense it was used in at all. Shorten that to “Hey, you mad?” and you get the problem that “mad” could be crazy or it could be angry, depending on the context and the dialect. If you were talking with a human, they might ask you for context cues before translating, but the LLMs just pick the most probable translation and go with that.
If you use a long conversational interface, it will get more context, but then you run into the problem that there’s no intelligence there. You’re basically conversing with the equivalent of a zombie. Something’s animating the body, but the spark of life is gone. It is also designed never to be angry, never to be sad, never to be jealous; it’s always perky and pleasant. So, it might help you learn a language a bit, but you’re learning the zombified version of the language.
Basically DnD.
D&D by the world’s worst DM. The key thing a DM brings to a game is that they’re telling a story. They’ve thought about a plot. They have interesting characters that advance that plot. They get to know the players so they know how to subvert their expectations. The hardest thing for a DM to deal with is a player doing something unexpected. When that happens they need to adjust everything so that what happens still fits in with the world they’re imagining, and try to nudge the players back to the story they’ve built. An LLM will just happily continue generating text that meets the heuristics of a story. But, that basically means that the players have no real agency. Nothing they do has real consequences because you can’t affect the plot of the story when there’s no plot to begin with.
And, what if you just use an LLM for dialogue in a game where the story/plot was written by a human? That’s fine until the LLM generates a plausible dialogue that’s “wrong”. Like, say the player is investigating a murder and talks to a guard. In a proper game, the guard might not know the answer, or might know the answer and lie, or might know the answer but not be willing / able to tell the player. But, if you put an LLM in there, it can generate a plausible response from a guard, and that plausible response might match one of those scenarios, but it doesn’t have a concept that this guard is “an honest but dumb guard” or “a manipulative guard who was part of the plot”. If the player comes and talks to the guard again, will they still be that same character, or will the LLM generate more plausible dialogue from a guard, that goes against the previous “personality” of that guard?
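You could try to patch this by pinning the guard’s hidden “character sheet” into the context on every turn; a rough sketch of that idea below (call_llm is a placeholder for whatever model API you’d use, and the persona text is made up). But even then, nothing guarantees the guard stays the same character once the history gets long, which is exactly the problem.

```python
# Rough sketch: keep the guard's hidden facts and personality fixed,
# and resend them with the whole history on every turn.

GUARD_PERSONA = (
    "You are Bram, a castle guard. Honest but not bright. "
    "You saw nothing on the night of the murder and you never lie. "
    "Stay in character and never invent facts beyond these."
)

conversation = []  # running dialogue history

def ask_guard(player_line, call_llm):
    """Build the prompt from persona + full history, then query the model."""
    conversation.append({"role": "user", "content": player_line})
    messages = [{"role": "system", "content": GUARD_PERSONA}] + conversation
    reply = call_llm(messages)  # placeholder: OpenAI, llama.cpp, whatever
    conversation.append({"role": "assistant", "content": reply})
    return reply
```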
There are different LLMs and some are better than others at some things. I also would not word a question for ChatGPT in the same way I would word a search query. I also never found the LLM provided search results useful, but I have found ChatGPT very useful when I want to figure out something about a technical thing. And of course it’s not gonna be perfect, but you can use it pretty well as a starting off point.
Translation is different from conversation, I can speak English and Portuguese fine, but having to translate is a pain in the ass. And even then there are better translators than Google’s - in my experience DeepL is much better. But going back to the topic, LLMs are good at generating text, that is what they do, and that is why they are good for practicing a language. And pen pal programs to learn a language don’t typically involve getting angry or jealous, the objective is just talking about your day and basic interests, and forcing you to use and think in the language. You won’t always have access to people, and LLMs can be a good replacement.
As for DnD/text adventure, again they’re not perfect, but I think you would be surprised how good some can be and how much story they can have to work with. And they’re obviously not meant to replace a real game with real people, it’s just a bit of entertainment. Some people might not have friends who like to play DnD, for example.
Essentially what I’m saying is: yes, LLMs are not perfect and don’t fit every scenario, but they don’t have to. Sometimes you don’t need a drill, when a screwdriver will do.
Hey that sounds exactly like what the last company I worked at did for every single project 🙃
Not even. Every time someone lets AI run wild on a problem, they’re trading all trust I ever had in them for complete garbage that they’re not even personally invested enough in to defend it when I criticize their absolute shit code. Don’t submit it for review if you haven’t reviewed it yourself, Darren.
My company doesn’t even allow AI use, and the amount of times I’ve tried to help a junior diagnose an issue with a simple script they made, only to be told that they don’t actually know what their code does to even begin troubleshooting…
“Why do you have this line here? Isn’t that redundant?”
“Well it was in the example I found.”
“Ok, what does the example do? What is this line for?”
Crickets.
I’m not trying to call them out, I’m just hoping that I won’t need to familiarize myself with their whole project and every fucking line in their script to help them, because at that point it’d be easier to just write it myself than try to guide them.
“Every time we use a lever to lift a stone, we’re trading long term strength for short term productivity. We’re optimizing for today’s pyramid at the cost of tomorrow’s ability.”
LLMs are absolutely not able to create wonders on par with the pyramids. They’re at best as capable as a junior engineer who has read all of Stack Overflow but doesn’t really understand any of it.
Precisely. If you train by lifting stones you can still use the lever later, but you’ll be able to lift even heavier things by using both your new strength AND the lever’s mechanical advantage.
By analogy, if you’re using LLMs to do the easy bits in order to spend more time with harder problems fuckin a. But the idea you can just replace actual coding work with copy paste is a shitty one. Again by analogy with rock lifting: now you have noodle arms and can’t lift shit if your lever breaks or doesn’t fit under a particular rock or whatever.
Also: assuming you know what the easy bits are before you actually have experience doing them is a recipe to end up training incorrectly.
I use plenty of tools to assist my programming work. But I learn what I’m doing and why first. Then once I have that experience if there’s a piece of code I find myself having to use frequently or having to look up frequently, I make myself a template (vscode’s snippet features are fucking amazing when you build your own snips well, btw).
If you don’t understand how a lever works, then it’s a problem. Should we let any person with an AI design and operate a nuclear power plant?
Actually… Yes? People’s health did deteriorate due to over-reliance on technology over the generations. At least, the health of those who have access to that technology.
“If my grandma had wheels she would be a bicycle. We are optimizing today’s grandmas at the sacrifice of tomorrow’s eco friendly transportation.”
This guy’s solution to becoming crappier over time is “I’ll drink every day, but abstain one day a week”.
I’m not convinced that “that ship has sailed” as he puts it.
Capitalism is inherently short-sighted.
Nahhh, I never would have solved that problem myself, I’d have just googled the shit out of it til I found someone else that had solved it themselves
Unless they’re being physically watched or had their phone sequestered away, they could just pull it up on a phone browser and type it out into the computer. But if they want to be a programmer they really should learn how to code.
I work in a dept. at a university that does all the proctored exams. None of that technology is allowed in the exam rooms. They have to put their watch, phone, headphones, etc in a locker beforehand. And not only are they being watched individually, the computer is locked down to not allow other applications to open and there are outgoing firewalls in place to block most everything network wise. I’m not saying it’s impossible to cheat, but it’s really really hard.
Some instructors still do in class exams, which would make it easier, but most opted for the proctored type exams especially during Covid.
Why would you sign up to college to willfully learn nothing
To get the piece of paper that lets you access a living wage
My Java classes at uni:
Here’s a piece of code that does nothing. Make it do nothing, but in compliance with this design pattern.
When I say it did nothing, I mean it had literally empty function bodies.
Yeah, that’s object oriented programming and interfaces. It’s shit to teach people without a practical example, but it’s a completely passable way to do OOP in industry: you start by writing interfaces to structure your program and fill in the implementation later.
Now, is it a good practice? Probably not, imo software design is impossible to get right without iteration, but people still use this method… good to understand why it sucks
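To make it concrete, here’s roughly what that kind of assignment looks like, sketched in Python with abstract methods (the class and method names are made up; the Java original would be an interface with empty method bodies):

```python
from abc import ABC, abstractmethod

# Step 1: structure the program as interfaces that "do nothing" yet.
class PaymentProcessor(ABC):
    @abstractmethod
    def charge(self, amount_cents: int) -> bool:
        """Charge the customer; return True on success."""
        ...

    @abstractmethod
    def refund(self, amount_cents: int) -> bool:
        """Refund the customer; return True on success."""
        ...

# Step 2 (later, maybe much later): fill in a concrete implementation.
class FakeProcessor(PaymentProcessor):
    def charge(self, amount_cents: int) -> bool:
        print(f"pretending to charge {amount_cents} cents")
        return True

    def refund(self, amount_cents: int) -> bool:
        print(f"pretending to refund {amount_cents} cents")
        return True
```

In Java it’s the same idea: the interface declares the shape of the program, and later classes fill in the bodies.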
So what? You also learn math with exercises that ‘do nothing’. If it bothers you so much add some print statements to the function bodies.
I actually did do that. My point was to present a situation where you basically do nothing in higher education, which is not to say you don’t do/learn anything at all.
Mine were actually useful, gotta respect my uni for that. The only bits we didn’t manually program ourselves were the driver and the tomcat server, near the end of the semester we were writing our own Reflections to properly guess the object type from a database query.
A lot of kids fresh out of highschool are pressured into going to college right away. It’s the societal norm for some fucking reason.
Give these kids a break and let them go when they’re really ready. Personally I sat around for a year and a half before I felt like “fuck, this is boring, let’s go learn something now”. If I had gone to college straight from highschool I would’ve flunked out and just wasted all that money for nothing.
Yeah I remember in high school they were pressuring everybody to go straight to uni and I personally thought it was kinda predatory.
I wish I hadn’t gone straight in, personally. Wasted a lot of money and time before I got my shit together and went back for an associates a few years later.
It’s hard to make wise decisions when you’re basically a kid at that age.
To get hired.
A diploma ain’t gonna give you shit on its own
So does breathing.
Because college is awesome and many employers use a degree as a simple filter any way
Not a single person I’ve worked with in software has gotten a job with just a diploma/degree since like the early 2000s
Maybe it’s different in some places.
Many HR departments will automatically kick out an application if it doesn’t have a degree. It’s an easy filter even if it isn’t the most accurate.
Yeah fair point, but then how are you going to get the job if you’re completely incompetent at programming 🤔
I don’t think you can get the CS degree with being completely incompetent. A bunch of interviews I had were white boarding the logic, not actual coding. Code is easy if you know the logic.
Just use AI bro
“Necessary, but not sufficient” sums up the role of a degree for a lot of jobs.
We are saying the same thing. Degree > diploma for jobs. Go to college, get degree
I meant any form of qualification. Sure it helps, but the way you get the job is by showing you can actually do the work. Like a folio and personal projects or past history.
Art? Most programming? “Hard skills” / technical jobs… GOOD jobs. Sure. But there’s plenty of degrees & jobs out there. Sounds like you landed where you were meant to be; a lot of folks go where opportunity and the market takes them.
It’s probably a regional difference. Here in AU, you can be lucky and land a few post grad jobs if you really stood out. Otherwise you’re entirely reliant on having a good folio and, most importantly, connections.
To get a job so you don’t starve
I don’t think you can memorize how code works well enough to explain it and not learn coding.
It’s super easy to learn how algorithms and whatnot work without knowing the syntax of a language. I can tell you how a binary search tree works, but I have no clue how to code it in Java because I’ve never used Java.
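Which is sort of the point: the concept itself is only a dozen lines once you know it, in whatever language. A minimal sketch in Python (Python rather than Java, just to match the other examples in this thread):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert a key; smaller keys go left, larger go right."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def contains(root, key):
    """Walk down the tree, halving the search space at each step."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False
```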
And similarly, I could read code in a language I don’t know, and understand what it does and how it works, even if I don’t know the syntax well enough to write it myself.
Yeah, exactly. At least any fairly modern language. I don’t think I could just pick up assembly and read it without the class I took. Heck, I don’t think I could read it anymore now that it’s been several years since that class.
I mean same, but you can look to the official docs for like what a loop or queue looks like
Not during a test. But maybe in those 20 hours they have.
Oh right I forgot about closed book tests. Been a while
Haven’t taken a cert in a while either?
You’d think that, but I believe you are underestimating people’s ability to mindlessly memorize stuff without learning it.
It’s what we’re trained to do throughout our education system.
I have a hard time getting mad about it considering it’s what we told them to do from a very young age.
I’m a full stack polyglot and tbh I couldn’t program in some languages without reference docs / LLM even though I ship production code in those languages all the time. Memorizing all of the function and method names and all of the syntax/design pattern stuff is pretty hard, especially when it’s not really needed in contemporary dev.
Yeah a doctor has to read up on a disease in a book when they encounter it. Completely normal
I’m pretty sure chatgpt just tells you how it works, so they probably just memorized what it said.
Exactly my thought
If it’s the first course where they use Java, then one could easily learn it in 21 hours, with time for a full night’s sleep. Unless there’s no code completion and you have to write imports by hand. Then, you’re fucked.
If there’s no code completion, I can tell you even people who’ve been coding as a job for years aren’t going to write it correctly from memory. Because we’re not being paid to memorize this shit, we’re being paid to solve problems optimally.
Also get paid extra to not use java
My undergrad program had us write Java code by hand for some beginning assignments and exams. The TAs would then type whatever we wrote into Eclipse and see if it ran. They usually graded pretty leniently, though.
There’s nobody out there writing “commercial” code in notepad. It’s the concepts that matter, not the spelling, so if OP got a solid grasp on those from using GPT, he’ll probably make it just fine
Perfectly articulated.
My first programming course (in Java) had a pen and paper exam. Minus points if you missed a bracket. :/
Haha same. God that was such a shit show. My hand writing is terrible lmao
It was the same for the class I took in high school. I remember the teacher saying that it’s to make sure we actually understand the code we write, since the IDE does some of the work for you.
I got -30% for not writing comments for my pen and paper java final.
Somehow it just felt a bit silly to do, I guess
Remember having to use (a modified version of?) quincy for C. Trying to paste anything would put random characters into your file.
Still beats programming on paper.
generate code, memorize how it works, explain it to profs like I know my shit.
ChatGPT was just his magic feather all along.
Dumbo reference