different people are gonna say different things about what they think it means and how to go about it. generally I'd say that learning to code is about learning a way to think about solving problems. The language itself is essentially nothing more than semantics. The act of taking problem X and breaking it into actionable items A, B, C, that's where "learning to code" exists. imo. Some people are naturally better at this part of the task. Others will need a lot of unlearning to get there.

This generalization is muddied a bit by the specifics of different languages. in python you might break X down into rev(CAT); T->B; Delay B; instead of building A, B, and C from the ground up (i hope this imagery makes some amount of sense). The reasons for those differences are the parts of the language that make it good or bad at various things.

Some languages have entirely different views about information, such as lisp. Many modern programmers will dip their toes into learning lisp and have a revelatory experience which changes how they view everything. And no, i don't mean everything to do with programming, but literally everything. This is an important point. Learning to program any language is, at its core, a way of /thinking/. Just as really learning a natural language changes how you think about concepts, objects, etc., learning a programming language changes how you think about processes, the ways to break them down, and the conceptual atoms of the representation of those problems.

In C and its progeny they break down to bits and bytes, memory addresses, etc. The functionality of the language and its component representations are tied directly to how the hardware literally works. memory is literally a list of numbers; it exists physically in your ram. The OS gets in the way of this being a clean mapping, as do modern architectures. But when C was made, memory was a chip on a board. right there *points*, that's where the code is. the address is 0x0000000? that's literally at the top left corner of the chip. break out the microscope and we can literally see the bits on the silicon. C was created in such an environment and was intended to be nothing more than an easier way of talking about those very real physical switches.

lisp adds a layer of abstraction between the hardware and the language. It's more "human" centric; it does the work of taking those ideas and turning them into machine instructions, and you just don't worry about it. But it is very simple and exceptionally extensible. Forth is very lispy, to the point that an argument could be made that forth is just a different way of representing lisp, or that lisp is a different way of representing forth. point is they have very small cores, which are the only real abstraction layer between the hardware where real transistors live and the conceptual problem space. with C the code isn't just conceptual problem solving, it's part that and part hardware language.

anyway. i'd say for a music analogy, if you learn C, it's like learning classical guitar. C++/C#/Java/etc might be something like a 7-string electric. It's got more shit going on, tone controls, trem, the extra string, the feel, weight, etc. But if you know classical guitar you can reason your way through the specifics pretty well. But an expert shredder will know you are from classical-guitar world. python, go, and to a lesser extent javascript, might be like a violin. music theory knowledge, rhythm, intervals, etc. all convert, but where are the frets? what's this long brush for? (looks up documentation for library x y z) something like "i can use such and such algo to solve this, how is it represented? Oh, no semicolon there, or use this instead of that, how do you write functions again?" all the while missing huge parts of what makes that language good for other things, without some kind of overview talking about it, or some playing around to figure out various parts of it.
within hours someone who knows C can write simple code in python, and in a week or two be fairly competent. but the way of thinking C instilled isn't the same as someone who natively "groks" python. They would use different tools altogether, most of the time.

Lisp/forth/haskell then might be an electric keyboard + DAW. the very nature of played rhythm might not matter anymore for a player using it; what matters is the output rhythm, after all. and so the entire way of composing and creating music is fundamentally different. as it is with lisp. after learning it and going back to the C-likes, there will be moments of "it sure would be nice to do it the lisp way." just like you can easily make unplayable chords in a DAW and might wish you could do them on a guitar.
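to make the "different tools altogether" point concrete, here's a trivial made-up task, reversing a string, done the way a C-trained mind might build it from the ground up versus the way a python native would reach for a built-in:

```python
def reverse_c_style(s):
    # the C thinker's instinct: walk the indices backwards and copy,
    # building the result piece by piece
    out = []
    i = len(s) - 1
    while i >= 0:
        out.append(s[i])
        i -= 1
    return "".join(out)

def reverse_pythonic(s):
    # the python native's instinct: rev(CAT) in one step, via slicing
    return s[::-1]

print(reverse_c_style("CAT"))   # TAC
print(reverse_pythonic("CAT"))  # TAC
```

both are correct; the difference is which conceptual atoms each mind reaches for first.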
learning programming the way you learn an instrument is probably possible. but... we are talking about something with mountains of complexity. It's much easier to do that with something like a commodore 64 and its programmer's guide (like a guitar with a chord book) than it is with even a raspberry pi and the internet. modern OSes and hardware are full of mysterious black boxes. understanding VGA video signals is pretty easy. you can look at some data sheets and mess with some wires, following the timing diagrams to produce the picture you want. understanding LCD displays, driven by Vulkan 3D, X11 windowing, on linux? good fucking luck. I honestly don't think there is a single person who knows how all of that works. sure, some people know aspects of it well, but the more time you spend on any one part the less you have for the others, and each part is so massive it becomes impossible.

old crt terminals took an analog signal for the video stream, with well-defined timing diagrams that map fairly cleanly to the frame buffer. The frame buffer was written by decoding the screen's text buffer and looking up bitmap glyphs for each character. The glyphs usually lived in rom. The characters were binary numbers (let's just say ascii; there were several encodings, but ascii was even then the predominant one). they lived literally in ram. the frame buffer was in ram as well, specifically the "video ram". there might be two of them so you could write to one and display the other. To change what was shown, you could quite literally change the value at the ram address of the character buffer, decode the glyph (this might be automatic), then swap which framebuffer was being shown. you could do raster graphics by bypassing the character buffer and writing directly to the framebuffer. this shit was dead simple. not only could you understand it all, it was almost expected that you would.
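that text-mode pipeline is simple enough to sketch in a few lines of python. the 3x3 glyphs below are invented for illustration (real character roms used 8x8 or similar), but the flow is the one described: character buffer -> glyph rom -> framebuffer:

```python
GLYPH_ROM = {                       # each glyph: 3 rows of 3 pixels, as bitmasks
    ord("H"): [0b101, 0b111, 0b101],
    ord("I"): [0b111, 0b010, 0b111],
}

def render(char_buffer):
    """Decode a character buffer into framebuffer scanlines."""
    framebuffer = []
    for row in range(3):                      # one glyph scanline at a time
        line = ""
        for code in char_buffer:              # each character cell on the row
            bits = GLYPH_ROM[code][row]
            line += "".join("#" if bits & (1 << (2 - b)) else "." for b in range(3))
        framebuffer.append(line)
    return framebuffer

char_buffer = [ord("H"), ord("I")]            # "poke" values here to change the screen
for scanline in render(char_buffer):
    print(scanline)
```

change a value in `char_buffer` and re-render, and you've done in spirit exactly what poking the character buffer did on those machines.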
networking is complicated. at the most fundamental level, if you want two computers to communicate you need some wires between them. with just 2 computers you might think one wire is sufficient: just send a stream of numbers to the other machine. but what if they use different clock speeds? how does computer B know where the signal starts and ends? what about random noise? or the signal being distorted, stretched out or squished together? (these things all can and do happen, and become nearly certain the longer the distance between the computers becomes). 2 wires then is the minimum: one to carry the clock used to know when to read each bit, and the other for the bit. and actually 4, because you don't want A talking over B on the same wire. actual cat5/6 cables use 8, that is, 4 pairs of 2; each pair sends the signal and its opposite, and each pair is twisted in the cable with a different number of twists per foot. sending the opposite signal on the pair has electromagnetic explanations which let the signal be read at much longer distances (since you can compare the level between the two wires of the pair, rather than the absolute value of one wire), and the twists ensure that a sudden impulse on one wire doesn't cause an echo on the wire next to it (remember from highschool physics that an electric charge induces a magnetic field around it, and that changing magnetic fields induce electric currents.) anyway. that's just the wires, but then there is the ip protocol shit.
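the differential-pair trick can be shown numerically. this is a toy model with made-up voltage levels and noise, not real line coding: the same interference couples into both wires of the pair, so the receiver's subtraction cancels it exactly:

```python
import random

def send_differential(bits, noise_amplitude):
    """Simulate a differential pair: noise hits both wires, the difference survives."""
    random.seed(0)                        # reproducible "interference"
    received = []
    for bit in bits:
        level = 1.0 if bit else -1.0
        noise = random.uniform(-noise_amplitude, noise_amplitude)
        wire_a = level + noise            # the signal, plus coupled noise
        wire_b = -level + noise           # its opposite, plus the *same* noise
        received.append(wire_a - wire_b)  # receiver reads the difference
    return received

bits = [1, 0, 1, 1, 0]
diffs = send_differential(bits, noise_amplitude=0.8)
decoded = [1 if d > 0 else 0 for d in diffs]
print(decoded)  # matches bits: the common-mode noise subtracted itself away
```

with single-ended signaling (reading `wire_a` alone against ground), noise of 0.8 against a 1.0 level would already flip bits; the difference is always a clean ±2.0.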
because we aren't just slapping wires between 2 computers, but rather have an ass-ton of them all connected together, we need a way to specify which one we are talking to (ip address). we also need a way to specify what language we are talking (essentially the port number; ie http is a language, and it's typically port 80, so if a computer gets a connection at port 80 it will assume you speak http; if it has some http service it will decode the connection in http form and then reply similarly). but we also have the meta-languages of tcp and udp, and these are essentially the packaging the data goes in, like how a letter is handled differently by the mail carriers than a box, or a pallet (ship by car, or truck, train, or plane, overnight, or in 4 weeks, etc). there historically were many of these, but in practice most internet communication is done with tcp these days. It breaks messages into roughly equally sized containers, and labels each one so the order can be determined by the receiver. udp doesn't care about order. each message is sent as a packet by itself, and if i send A B C then D, but because of the routers between us you end up getting D C A, that's the order you will process them in. TCP also has handling for when one of the parts goes missing, to resend it. UDP doesn't give a shit. B got lost? /oh well/

tcp was designed for packets to occasionally go missing, due to a router not being fast enough, or a collision, solar flare, downed wire, etc. but modern routers kinda go overboard with their buffers, and collisions almost never happen anymore thanks to full-duplex connections (in the old days, cat5-style dedicated in and out wires were uncommon; the in would also be the out, and who spoke was often moderated algorithmically (such as token ring) or computers just sent messages whenever, but since you didn't constantly use the network it wasn't often a big deal). but ya, it's "rare" now to lose packets.
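the tcp/udp contrast above can be sketched in a toy model. the sequence numbers and payloads are invented, and real tcp does far more (windows, acks, retransmit timers), but the core difference is just this:

```python
def tcp_receive(packets):
    """Reassemble by sequence number; report gaps that would trigger a resend."""
    by_seq = dict(packets)
    highest = max(by_seq)
    missing = [seq for seq in range(highest + 1) if seq not in by_seq]
    data = [by_seq[seq] for seq in sorted(by_seq)]
    return data, missing       # real tcp would ask the sender to resend `missing`

def udp_receive(packets):
    """No ordering, no recovery: process packets in arrival order."""
    return [payload for _seq, payload in packets]

# sender transmitted A B C D; the routers delivered D, C, A (B was lost)
arrived = [(3, "D"), (2, "C"), (0, "A")]
print(tcp_receive(arrived))    # (['A', 'C', 'D'], [1])  -> resend seq 1
print(udp_receive(arrived))    # ['D', 'C', 'A']         -> /oh well/
```

the sequence number is the "label on each container" the post describes: it's what lets the tcp side both reorder and notice that B never showed up.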
As far as "why build a shitty version of XYZ", well, simply put, it is to learn how to not build shitty versions of things. I dunno how you learned bass, but most people probably tend to learn some riff they like, then once they have that, experiment with modifying it in various ways, then maybe do scales/triads or something equally well trodden. making tetris (or pong, or pacman, or space invaders, etc) is learning a well-known riff. sure, you could just aimlessly pluck the strings until you stumble on something you like, but if you have an idea in mind, even one already tread, it's generally easier to get to a place you want to be.
if programming games, it really is probably optimal to speedrun through the history of games to the branch point you want, if you /really/ want a good understanding. or just use a prebuilt engine + prefabs and kludge some hacky thing from there. that might be a bit like taking samples and ripping them apart for some song. not wrong, but also not the same as knowing how to play it.
but eh. at this point i become hypocritical. I have been wanting to learn godot for a while and was like, i'll do a breakout, then a pacman, then a mario, then a metroid, then a doom, then a kanon, then finally the game i want to make. but uh. I have a paddle moving and a ball bouncing, got to bricks, got sidetracked halfway through making the sprite, and haven't touched it since... ... (so like 40 minutes maybe... ... .. ... . )
3 Name: Anonymous 2024-07-07 00:54
regarding turing completeness, this term has a pretty strict definition with fairly limited scope. it really only speaks about computability. some formal notation (programming languages meet the criteria) is deemed turing complete if it can simulate some other turing-complete language, or can otherwise be shown to be able to compute any computable algorithm. honestly it's a really low bar. but it doesn't really have anything to do with how we use or talk about computers. turing completeness, for instance, has nothing to do with readouts, displays, input methods, sound, networking, storage capacity (to be 100% turing complete requires infinite storage, but in practice 99.99% turing complete happens at like 2kbits, which is why we don't really care), etc. a turing-incomplete system could easily play games. and as far as the user experience goes, most games are turing incomplete, so if we wrap one up in a box and only interface with it as expected, most game consoles are arguably not computers at all. this is academic semantics. what people really mean by turing complete with regards to programming languages is: are they functionally equivalent? and generally speaking yes, so much so that the very few languages that manage not to be turing complete are much more interesting. just as a pocket knife and a katana are both equally "things that cut", ie they are "cutting complete", but i'd much rather peel a potato with a pocket knife, and bisect a dog with a katana.
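to show how low the bar is: brainfuck, a famously tiny language with 8 commands, is turing complete (given an unbounded tape; the sketch below uses a finite one, which as noted above is all we ever get in practice). a minimal interpreter fits in a couple dozen lines:

```python
def run(program, tape_len=256):
    """Interpret a brainfuck program; return whatever it prints."""
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    jumps, stack = {}, []
    for i, c in enumerate(program):       # pre-match the loop brackets
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(program):
        c = program[pc]
        if c == ">": ptr += 1
        elif c == "<": ptr -= 1
        elif c == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".": out.append(chr(tape[ptr]))
        elif c == "[" and tape[ptr] == 0: pc = jumps[pc]   # skip loop
        elif c == "]" and tape[ptr] != 0: pc = jumps[pc]   # repeat loop
        pc += 1
    return "".join(out)

# 8 * 8 = 64, plus 1 = 65 = ascii "A"
print(run("++++++++[>++++++++<-]>+."))  # prints "A"
```

that's the whole "computer". no display, no sound, no networking, and yet with enough tape it can compute anything your desktop can, which is exactly why the term says so little about what a language is actually pleasant or practical for.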
but again, a language being turing complete doesn't mean it can do audio, write a file, print graphics, be used to write device drivers, build a game, etc. especially since in order to do most of those things you need to use OS-specific kernel functions to access the hardware, since the os keeps that shit behind a locked door. just try "poking" an address that doesn't "belong to you" on windows. well, you can do it, but you have to be sneaky and not trigger any of the many watchdogs. linux is at least a bit better, but even there it does stuff like randomizing the layout of device and user memory on every boot, in an effort to keep malicious actors from knowing where exactly stuff is. so you generally will have to use a language that has some kind of operating system translation layer. ie in godot you don't use win32 api calls or x11 calls to create a window, or anything like that; it's handled by the engine and abstracted away. languages like C++ or C have you do the whole os-specific song and dance for every supported os separately, and then at compile time swap in whichever specific song and dance that build needs. something like java has a single song and dance you do, and the vm that runs the code is compiled for the specific os it runs on, so the same java executable runs everywhere the jvm has been ported to. UXN uses this approach too, but the song and dance it has you do is fairly similar to what would be done on an old 8-bit system. but again, the uxn vm translates this into os-specific calls (opengl most of the time, if i recall)
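python's interpreter is itself one of these translation layers, and where the abstraction runs out you branch per-OS by hand, which is the scripting equivalent of C's per-os #ifdef blocks. a trivial, hypothetical example, clearing the terminal, where the *idea* is identical but the os-specific incantation differs:

```python
import platform

def clear_screen_command():
    # hypothetical example: same intent, different os-specific command.
    # the branch plays the role of C's `#ifdef _WIN32` song and dance.
    if platform.system() == "Windows":
        return "cls"
    return "clear"   # linux, macOS, and other unix-likes

print(clear_screen_command())
```

the stdlib hides most such branches from you (open a file, make a socket, the same call works everywhere); you only see this seam when you reach for something the translation layer didn't cover.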
>>2 My interpretation of what he said was less like "why bother making shitty things" and more like "why learn this skill if all I've thought of doing with the skill is making copies of things"
5 Name: Anonymous 2024-07-07 20:01
>>4 That's possible. I just thought he had left that unsaid; I just assumed that anyone who uses the command line, regardless of thinking of themselves as a programmer (I certainly don't), naturally discovers scripts to stuff repeated actions into. I later parameterize the shell scripts, and sometimes end up rewriting them in python. It's often stuff I couldn't find on github.
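that progression might look like this. the shell loop, filenames, and extensions here are all hypothetical, just the shape of a one-off `for f in *.webp; do convert "$f" "${f%.webp}.png"; done` after it gets parameterized and rewritten:

```python
import argparse
import pathlib

def build_commands(directory, src_ext, dst_ext):
    """Return the conversion commands the old shell loop would have run."""
    return [
        ["convert", str(p), str(p.with_suffix(dst_ext))]
        for p in sorted(pathlib.Path(directory).glob("*" + src_ext))
    ]

def main(argv=None):
    # the parameters that used to be hard-coded in the shell script
    parser = argparse.ArgumentParser(description="parameterized rewrite of a shell loop")
    parser.add_argument("directory")
    parser.add_argument("--src-ext", default=".webp")
    parser.add_argument("--dst-ext", default=".png")
    args = parser.parse_args(argv)
    for cmd in build_commands(args.directory, args.src_ext, args.dst_ext):
        print(" ".join(cmd))   # a real script would subprocess.run(cmd) here
```

run as e.g. `python convert.py ~/downloads --src-ext .webp`; the point is just that the hard-coded bits of the one-liner became flags.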
6 Name: Anonymous 2024-07-08 00:38
There's a lot of stuff I want to create that requires code. I have some progress on some of these ideas, so trying to beat me to them might not go well for u (・`ω´・)
- a "trash bin" for files, but instead of deleting the trash every once in a while, a file's expiration date takes care of it
- a file picker with thumbnails. unlike locked-down GTK shit, you could convert webp/jxl to png for 4chan uploads with post-processing scripts
- instead of using prompts to generate highly-specific porn on your computer, that technology could be used to find that specific pic you saved on your computer 3 years ago (i have some bookmarks of projects attempting this)
- visualize conversations in BBS threads as graphs
- with so many new, specialized search engines out there, a database for storing a search engine's results to specific queries over time, to track the quality
- a bespoke GUI application could be used to mass-compare and score search engines: it asks a user to re-order mixed results by preference, and the search engines whose links humans would like to see higher receive more points
- an mpv plug-in that lets you search subtitles like a transcript, conveniently
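for what it's worth, the first idea sketches out pretty small. the `.expires` sidecar-file convention below is invented for illustration, not from any existing tool:

```python
import time
import pathlib

# hypothetical convention: "photo.jpg.expires" holds a unix timestamp
EXPIRY_SUFFIX = ".expires"

def purge_expired(trash_dir, now=None):
    """Delete every trashed file whose expiration timestamp has passed."""
    now = time.time() if now is None else now
    removed = []
    for marker in pathlib.Path(trash_dir).glob("*" + EXPIRY_SUFFIX):
        expires_at = float(marker.read_text())
        if expires_at <= now:
            victim = marker.with_suffix("")   # strip ".expires" -> the real file
            if victim.exists():
                victim.unlink()
            marker.unlink()
            removed.append(victim.name)
    return removed
```

run it from cron (or a systemd timer) against the trash directory and each file deletes itself on schedule, no periodic "empty trash" ritual needed.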