Why am I interested in informatics?
UPDATE on 31/05/2024: I've made
a YouTube video about what I think every politician should know about
the Internet. If you cannot open it, try downloading this
MKV file.
Before we begin, let me make it clear that I am not one of those people who say programming is an easy way to make money. It is not. I will share my Reddit post about it here:
Don't expect it to be so easy. I have been trying to learn to program for 8 years now and I still have not managed to get an entry-level job. This applies to many things in life: beware of survivorship bias. Don't listen just to the success stories, listen also to those who did not succeed. If you listen only to the success stories, you will have a very mistaken picture of reality.
In short, I think that "Learn to code!" has become the new "Let them eat cake!", especially since it is often told to people who have lost their jobs and have mouths to feed. Maybe it was sound advice in the late 1990s and early 2000s, when very little was expected from a web developer (when everybody who knew HTML, CSS and maybe basic JavaScript could get a job as a front-end developer), but those days are gone.
I have spent much of my time preparing for programming competitions such as Infokup (back in 2013, I took 4th place in the Croatia-level Infokup algorithmic competition, and, in 2014, I took 6th place in that same competition) and the Croatian Open Competition in Informatics (back in 2016, I took 15th place in the Croatia-level algorithmic competition called HONI, and, in 2019, my team took 7th place in the STEM Games algorithmic competition). I thought they would make me a competent programmer. Not only did they not make me a competent programmer, they might even have made me worse (as they encourage bad programming practices, such as short variable names, and give people bad instincts, such as the instinct that, if a program is slow, it is usually because of the algorithm). I also thought that, once I built a compiler for my programming language targeting WebAssembly, any employer would be impressed. So I spent a lot of my time learning about relevant things and building it. As it turns out, no employer is impressed by such things these days. (I once learned that a Serbian company called RT-RK, which also has an office in Osijek, where I was living at the time, was searching for a compiler developer. So I sent them, via e-mail, my AEC-to-WebAssembly compiler on GitHub. They did not even invite me for an interview. They replied in an e-mail that they are searching for somebody who knows in detail how GCC or LLVM, preferably both, work internally, and that my project does not show them that I know that.) The barrier to entry into the world of programming is high, and it gets even higher as time passes (computers become more complicated, the average person becomes more able to build a website, so more is expected from a programmer...).
I am sorry if I burst your bubble, but a little slap of reality in your face will save you from a lot of pain later.
And do not expect universities to help you with programming significantly. (UPDATE on 01/01/2024: I am quite sure that, now that I have graduated, I am actually less capable of being a useful programmer than I was back in 2018, when I started studying computer science at a university. Back in 2018, I didn't have a psychotic disorder and I had a lot more enthusiasm. And I am quite sure the university is at least partly responsible for me getting a psychotic disorder and losing enthusiasm. Even if my psychotic disorder was caused by me regularly taking Paracetamol and energy drinks (rather than by the stress at the university), as my psychiatrist thinks... would that have happened if I hadn't been studying at a university, or if I had been studying something easier than computer science? I don't think so.) Let me tell you an anecdote from my experience studying computer science at the university. During the summer break, my father asked me which courses I would have the next semester. I was naming the courses, and, when I said "object-oriented programming", my father interrupted me and said: "How? Object-oriented programming? A really weird name. And is there, then, some subject-oriented programming?" I said that, as far as I knew, there wasn't (only later did I find out that subject-oriented programming is indeed a thing). Then my father said: "I guess that's something that we historians can't understand. No, in Croatian, that's not a good name." After a few weeks, we met an old friend of his. And my father told me: "So, tell him, what's the name of the course you have this semester." So I repeated: "object-oriented programming". And then my father asked him: "So, what does that name mean? Can you guess? Well, can you think of a name that's more stupid?" And my father's friend said: "Well, I guess it's called object-oriented because programming is usually done by mathematicians and people from the natural sciences. If programming were done by historians or poets, then it would be called subject-oriented programming." I hope this gives you some idea of how difficult it is to study computer science at the university. And it is important to understand that the programming which is done at the university has little to do with programming in real life. Studying computer science at the university will familiarize you with computer science, electrical engineering (if you do not know what electrical engineering is, here is a quote from my professor Željko Hederić that I think illustrates it wonderfully: "When you try to spill water from a glass, the water will not start spilling until some air gets into the glass, do we agree? Similarly, electricity will not start flowing from a socket until some magnetic field gets into that socket. And that is, basically, what the Biot-Savart law is saying.") and advanced mathematics (much of the advanced mathematics is things you already know, but in a very confusing language. Here is a joke I have written about it in Latin, based on a true story: "Hodie in universitate (ego studeo scientiam computorum) docebamur de theoria unionum. Professor nobis explicabat, cur numerus cardinalis unionis unionum non semper sit summa (additio) cardinalum numerorum unionum: 'Si hoc veritas esset, canis debet octo crura habere. Canis enim habet duo crura antica, duo crura posteriora, duo crura laeva, et duo crura dextera. Summa (additio) numerorum cardinalium earum unionum octo (quater bini) est, sed numerus cardinalis unionis earum unionum, sane, quattuor est.'" In English: today at the university (I study computer science) we were taught about set theory. The professor was explaining to us why the cardinal number of a union of sets is not always the sum of the cardinal numbers of those sets: "If that were true, a dog would have to have eight legs. A dog has two front legs, two hind legs, two left legs, and two right legs. The sum of the cardinal numbers of those sets is eight (four times two), but the cardinal number of the union of those sets is, of course, four."
And the rest of the university mathematics is some very difficult and rarely useful stuff that can make your engineering marginally better. Take a look at the history of telephones. As far as I know, the only part of telephones where university-level mathematics is used is tone dialing, which uses the Discrete Fourier Transform. And telephones functioned well before that using pulse dialing; university-level mathematics only made them marginally better.) By the way, professors will often be angry at this, and for an understandable reason. In real life, programming rarely involves advanced knowledge of even computer science (I think there are only two times in my projects where knowledge of computer science helped me significantly: when I used the DFS algorithm to avoid stack overflow in my AEC-to-x86 compiler, and when I used the LCS algorithm from dynamic programming in my AEC-to-WebAssembly compiler to provide corrections for misspelled variable names. It is also possible that the little knowledge of computer science that I have has led me astray multiple times, as somebody on Discord suggested a better solution for providing corrections for misspelled variable names than using LCS.), let alone electrical engineering (I think my knowledge of electrical engineering has never helped me with my projects) or mathematics (I think the only time my knowledge of advanced mathematics helped me was when implementing mathematical functions in my AEC-to-WebAssembly compiler. And note that, had I made that compiler properly, by targeting WebAssembly via LLVM instead of targeting WebAssembly directly, I would not have had to do that.). When you program in the real world, you will probably spend most of your time on things such as getting your web app to work in Internet Explorer, or on something equivalent to that in parts of programming not related to web apps. (The thing that bothers me with the compiler for my programming language right now is that, after I added the suggestions for misspelled variable names, the compiler crashes if it is compiled with Visual Studio, or with CLANG with some options on Windows, but apparently not if it is compiled using any other C++ compiler. What you will spend most of your time dealing with when programming are those annoying little things about programming tools that have little or nothing to do with the problem you are trying to solve. While things in programming languages such as portability, compiler warnings and exceptions are good things, you need to understand that, quite often, they are illusory and lull programmers into a false sense of security. A program that compiles and works with one C++ compiler can very well not even compile with another one, let alone work. And it is also like that with Java. Java is supposed to be a compile-once-run-everywhere language, but a much better description is compile-once-debug-everywhere. In theory, if you are triggering undefined behaviour, the C++ compilers should give you a warning. In reality, often enough for it to be a problem, none of them will end up warning you, as has happened in my case. In theory, if your program is misusing the C++ standard library, the C++ standard library should throw an exception, and that is what exceptions in programming languages are for.
In reality, unless you know how the C++ standard library works in the smallest details, what will sooner or later happen is that your program will appear to work, except that it sometimes unpredictably has segmentation faults under one compiler, on an operating system under which the debugging tools you know how to use do not work. And there are reasons why C++ compilers and the standard library are so permissive. The first reason is that there is a bunch of bad code already present in C++ projects, and compilers which complain about it will be perceived as faulty. I know how annoyed I feel when I try to build an older open-source C++ project from source with a modern C++ compiler and get tons of error messages, while using an older C++ compiler works. There is, unfortunately, an incentive not to fix ages-old bugs in programming languages and programming tools. The second reason is that C++ compilers and standard libraries need to make trade-offs between catching errors and being fast enough for correct programs that need high performance and fast compilation times. These are problems with computers that have nothing to do with computer science, let alone mathematics or engineering, but which programmers need to deal with every day. Oh, and understand that you will sometimes have to apply fixes which you have no idea how they work. You will get a lot further by being empirical than by being a rationalist and only relying on things you understand. This is especially true when writing shell scripts, although it's also somewhat true in other types of programming. When writing a shell script to download, compile and run Analog Clock in AEC for x86, I ran into a problem that, on recent versions of Debian Linux, the linker insists that -lm, the option for linking in the math library, is put after the source files, and it outputs some confusing error message if it's put before the source files. A rationalist solution would be to try to implement the math functions that Duktape invokes yourself, like I've implemented them in my programming language when writing Analog Clock for WebAssembly. Instead, I did a bit of Googling, and found a much nicer solution: put -lm after the source files, and not before them. I do not understand how it works, but I can empirically say it does work. You can read zero9178's explanation for that if you are interested; I do not fully understand it either, and I probably know more about compiler theory than most programmers. And when writing a shell script to download, compile and run Analog Clock in AEC for WebAssembly, I realized that the code my AEC-to-WebAssembly compiler outputs works only on NodeJS 11 and newer, because it relies on WebAssembly global variables. So, I decided to warn the user if the NodeJS they have installed is older than NodeJS 11. So, I wrote node_version=$(node -v) to store the NodeJS version string into a variable called node_version, so that I can extract the first number from it and act accordingly. That worked on Linux, but on Windows, NodeJS output the error message "stdout is not a tty" instead of outputting the version string. I couldn't think of a rationalist work-around. But I was empirical, so I posted a question on StackOverflow, and I got the answer: on Windows, do node_version=$(node.exe -v). I almost did not try it, as it seemed so ridiculous. However, I was empirical enough that I tried it, and it somehow magically worked. I still have no idea how it works; it has something to do with the difference between how terminals work on Linux and how they work on Windows, but I don't understand the details.
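Here is roughly what those two fixes look like in a shell script (just a sketch: the file names and the exact way of detecting the operating system are made up here, not copied from my actual scripts):

# Linking the math library: on recent versions of Debian, -lm has to come
# after the source files, otherwise the linker reports undefined references.
# This fails:   gcc -lm analogClock.c -o analogClock
# This works:
gcc analogClock.c -o analogClock -lm

# Getting the NodeJS version: on Windows (under a Unix-like shell),
# "node -v" prints "stdout is not a tty", but "node.exe -v" works.
if [ "$(uname)" = "Linux" ]; then
  node_version=$(node -v)
else
  node_version=$(node.exe -v)
fi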
And like I've said, the fact that you sometimes stumble upon problems with truly mysterious fixes is true not only in shell scripting, but also in other types of programming, such as CSS. Look up the Internet Explorer 6 double-margin bug. Or how, when programming my PicoBlaze Simulator, I ran into a problem that the tooltips I made worked in Firefox but not in Chrome. In both cases, the fix seems so ridiculous that it doesn't even look worth trying. For the Internet Explorer 6 double-margin bug, the cause was a mysterious bug in Internet Explorer 6. For the Chrome issue I ran into, the people on StackOverflow insist that Chrome is actually obeying the standard, while Firefox isn't. If so, the standard goes wildly against common sense here. Programming is an empirical thing, but universities pretend it's a rationalist thing.) You will gain next-to-no experience with that at the university, as the programming tools used at the university are different from the programming tools used in real life. JavaScript, for example, is taught very little at the university, yet it is the most popular programming language these days, and it will likely remain so in the future. Not because it is a good language; in fact, it is widely agreed to be an exceptionally poorly designed language, full of quirks which programmers need to spend a lot of time learning in order to use it effectively. It is the most popular programming language because of the technicality that, in order to make your application run in an Internet browser, for most of the time the Internet has existed, there was no alternative. WebAssembly will replace a part of JavaScript, but probably not most of it. At the university, you will gain a lot of experience with programming languages such as MatLab, which is almost never used for software development in the real world, and which is also very different from the languages used in the real world. My perception is that the knowledge gained at the university helps only when dealing with things such as nuclear reactors or medical devices. In those cases, it is useful to be able to make academic arguments that your program will work correctly in unexpected situations. In most other cases, though, the knowledge gained at the university is not useful.
A common response given by professors at the university to this "knowledge taught at the university is almost never useful" argument is something along the lines of "You will indeed only rarely need the stuff you learn here, but, unless you are taught it, you will not recognize when you need it.". The problem with that response is that the same is true for most things in programming (and even for most things in life), and not only for the algorithms and data structures and other things taught at the university. When I was designing this website back when I was a high-school student, I needed advanced CSS (CSS queries...), but I did not recognize that I needed it. Instead of learning advanced CSS, I did a lot of browser sniffing and other bad things in JavaScript. Knowing advanced CSS would have saved me a lot of work and given me a superior result. But I did not recognize that I needed it. Just like, when implementing suggestions for misspelled variable names in my AEC-to-WebAssembly compiler, I did not recognize that I needed Levenshtein Distance (which, although it is a useful algorithm, we were not taught in our Algorithms and Data Structures classes) and I instead used Longest Common Subsequence (which we also were not taught at the university, but which I nevertheless happened to know), which gave me significantly worse results. And who knows how many other things in programming I needed, but did not recognize that I needed?
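To illustrate what I mean, here is a sketch of Levenshtein Distance, the algorithm I should have used, computed with dynamic programming (this is not the code from my compiler, just a minimal illustration):

#include <algorithm>
#include <string>
#include <vector>

// Levenshtein distance between two variable names: the number of
// insertions, deletions and substitutions needed to turn one into the
// other. A compiler can then suggest the declared name with the smallest
// distance to the misspelled one.
int levenshteinDistance(const std::string &a, const std::string &b) {
  std::vector<std::vector<int>> table(a.size() + 1,
                                      std::vector<int>(b.size() + 1));
  for (size_t i = 0; i <= a.size(); i++)
    table[i][0] = i; // i deletions
  for (size_t j = 0; j <= b.size(); j++)
    table[0][j] = j; // j insertions
  for (size_t i = 1; i <= a.size(); i++)
    for (size_t j = 1; j <= b.size(); j++)
      table[i][j] = std::min({table[i - 1][j] + 1,  // deletion
                              table[i][j - 1] + 1,  // insertion
                              table[i - 1][j - 1] + // substitution
                                  (a[i - 1] == b[j - 1] ? 0 : 1)});
  return table[a.size()][b.size()];
}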
It is important to understand that, while some technological advancements turn out to be life-saving medical technology or some other tool that obviously improves our quality of life, the vast majority of technological advancements are not like that. As Ayn Rand said, most technological advancements solve problems created by technologies that we have become dependent on. Ayn Rand was referring to cars, but the same can be said for computers. In the 1980s, engineers at companies such as Altera and Xilinx invented field-programmable gate arrays (FPGAs): chips intended to do the work of a CPU, but whose architecture (whether it is x86 or ARM...) can be changed programmatically. Of course, FPGAs were nearly useless back then, as there was no high-level programming language targeting them. Then engineers working for the US government invented VHDL, a hardware description language that can be used to target FPGAs. But FPGAs were still relatively useless, as, well, there was no actual CPU that could be synthesized to work on an FPGA. But then came the engineers at Xilinx, who wrote PicoBlaze. While it solved some problems, it created new ones. To search for errors in programs written for PicoBlaze, it is useful to be able to run your programs on the computer you are developing the program on, which is only possible using an emulator. Furthermore, the assembler for PicoBlaze that Xilinx produces is, because of some technical details, difficult to run on various types of computers. So, my Computer Architecture professor Ivan Aleksi pointed out that problem to me and suggested that I make a web-based assembler and emulator for PicoBlaze. And, when I made it, I created new problems: that emulator is slow and may even be incorrect in some cases (what happens to the flags when the regbank changes?). And Xilinx's creation of PicoBlaze also created another problem: no compiler for a high-level language can actually target PicoBlaze. And that is difficult to solve, as PicoBlaze is a very difficult compilation target (you can trust me on that, as I have made two compilers for my programming language, but I have no idea how I would make one that targets PicoBlaze). Aren't we trying to get out of a hole by digging? On what planet does that make sense? If you don't like completing such seemingly pointless tasks, then don't be a programmer, as that's what programming is mostly about. When I told my father I thought I should become a programmer without going to a university, he told me: "You need to understand you will not be working on the abstract idea of programming. You will not be programming Jesus and Mary in the air, you will be programming actual devices, and you need to understand how they work. And the university will help you understand how they work.". It seemed to me back then that it could be true, but it seems to me more and more that that couldn't be further from the truth. It seems to me now that most programmers are working on the abstract idea of programming, rather than programming something in reality.
If you would like to read more about what I think about the way programming is taught at the university, click here.
OK, now we can continue...
So, why am I interested in informatics? Because it works! I see that if I write something in one of the so-called programming languages, a computer understands me. Sometimes it's hard to understand a computer, and it's often hard to make yourself understood by a computer, but that's because computers are different from human beings. They can easily do things human beings have no hope of doing, like displaying animations (which is basically drawing tens of images per second). Human brains are powerful computers, but they are also very specialized to solve only certain kinds of tasks (those that were useful during evolution). That's why what seems simple to us has little correlation with what is actually simple for a computer. That is known as Moravec's Paradox.
So, what are programming languages? Well, see, computers natively understand only machine code, made of ones and zeros. It's very hard for us to understand the language of ones and zeros. For similar reasons, computers have a very hard time understanding human languages. That's why we needed to invent some special languages that both humans and computers could understand. These are called programming languages. There are programs that translate programming languages into ones and zeros; these are called compilers and interpreters.
So, what are programming languages like? Well, there are two basic types of programming languages. One type is the so-called imperative languages, and the other is the so-called declarative languages. An example of an imperative language is C++, and an example of a declarative language is Haskell. Here is an on-line compiler for C++, and here is an on-line interpreter for Haskell. In declarative languages, the sentences would mostly translate to human languages as strict mathematical definitions, and in imperative languages, they would mostly translate as imperatives.
To explain the difference between the programming languages, I will use the following example of a simple program. Leonardo of Pisa, also known as Fibonacci, was a mathematician who introduced the Arabic numerals to Europe. He lived in the 12th and the 13th century. He worked on many natural sciences. One of the questions he asked himself was how fast rabbits would procreate if there was enough food for every single one of them. So he did some experiments. What he found out was that there was indeed a rule. Namely, the number of rabbits in some generation is equal to the sum of the numbers of rabbits in the previous two generations. For instance, if there are three rabbits in the current generation, and there had been two rabbits in the previous generation, there will be five rabbits in the next generation. From then on, the sequence of numbers in which each one is equal to the sum of the previous two has been called the Fibonacci sequence. The zeroth number in that sequence is defined to be zero, and the first one to be one. So, that sequence goes like this: 0, 1, 1, 2, 3, 5, 8, 13, 21... We want to make a program to find some number that's far along in that sequence (ignoring the obvious fact that that number would be far larger than the actual number of rabbits in nature because, well, once there are many of them, some of them will die before they procreate, either because of predators or because of starvation).
So, how will we do it in Haskell? We just need to translate a strict mathematical definition of the Fibonacci sequence into it: "The Fibonacci sequence is a sequence of integers (whole numbers). The zeroth number in that sequence is zero. The first one is one. Every other number is equal to the sum of the two numbers right before it." Here we go:
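Something like this (a sketch that matches the definition above; the exact listing may have looked slightly different):

-- The Fibonacci sequence as a recursive definition on integers.
fibonacci :: Integer -> Integer
fibonacci 0 = 0
fibonacci 1 = 1
fibonacci n = fibonacci (n - 1) + fibonacci (n - 2)

main :: IO ()
main = print (fibonacci 10) -- prints 55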
Now, how will we translate that to C++? We can't do it literally. We need to make an algorithm, a sequence of instructions a computer has to follow in order to calculate it. A concept you probably need to understand for that is called variables. Variables are readable and writable places in memory that a symbol is assigned to. They can store various pieces of information; in this case, they will store whole numbers. So, in C++, when you say int a; that means "create a variable that stores integers (int means integer) and assign it the symbol 'a'" (the semicolon, in this case, marks the end of a sentence). Now, if you say a=5;, that means "store the number 5 in the variable 'a'". If, after that, you say a=a+5;, that means "store the number a+5=5+5 (since we previously stored the number 5 into 'a')=10 into 'a'". So, what we will do is make a program that has two variables, 'a' and 'b'. In the beginning, 'a' will be zero and 'b' will be one. Now, we will add 'a' to 'b', and we will then assign the difference between 'b' and 'a' to 'a'. And we will repeat that 'n' times, where 'n' is the index of the Fibonacci number we are trying to find. Then we will say that 'a' is the nth Fibonacci number. Let's say we want to find the third Fibonacci number. So, in the zeroth step, 'a' is 0 and 'b' is 1. In the first step, b=a+b=0+1=1, and a=b-a=1-0=1. In the second step, b=a+b=1+1=2, and a=b-a=2-1=1. And in the third step, b=a+b=1+2=3 and a=b-a=3-1=2. There we go, we will say that the solution is a=2. In C++, we say that we "return" 'a' (that phrase makes sense once you look deeper into the language). The usual way of saying you want to repeat something 'n' times in C++ is to say something that would literally translate as "For every integer 'i' from zero that's smaller than 'n', increasing 'i' every time by one, do...". Without further ado, here is the code:
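Again, a sketch following the algorithm described above (the original listing may have differed in details):

// Computes the n-th Fibonacci number with the two-variable algorithm
// described in the text.
#include <iostream>

int fibonacci(int n) {
  int a = 0, b = 1;
  // "For every integer 'i' from zero that's smaller than 'n',
  //  increasing 'i' every time by one, do..."
  for (int i = 0; i < n; i++) {
    b = a + b; // b becomes the sum of the previous two numbers.
    a = b - a; // a becomes what b was before this step.
  }
  return a; // a is now the n-th Fibonacci number.
}

int main() {
  std::cout << fibonacci(10) << std::endl; // prints 55
  return 0;
}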
Imperative languages are divided into the so-called higher and lower imperative languages. C++ is a higher imperative language. Lower imperative languages are rarely used today. They are hard for a human to understand, but easier for a computer to understand. An example of a lower imperative language is Assembly. It has, unlike Haskell or C++, many dialects. In fact, in general, each Assembly compiler (assembler) has its own dialect of Assembly. So, an Assembly program that works on Windows doesn't work on Linux, even if you have an Assembly compiler for Linux. A dialect of Assembly I am somewhat familiar with is Flat Assembler. Here is what the program would look like in Flat Assembler:
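A sketch in Flat Assembler syntax, for 32-bit Linux, following the same algorithm (the original listing may have looked different; since printing a number in Assembly takes quite a bit of extra code, this sketch returns the result as the process exit code, visible in the shell with echo $?):

; Computes the 10th Fibonacci number and returns it as the exit code.
format ELF executable
entry start

start:
        mov     ecx, 10               ; n = 10
        xor     eax, eax              ; a = 0
        mov     ebx, 1                ; b = 1
computeNextNumber:
        add     ebx, eax              ; b = a + b
        mov     edx, ebx
        sub     edx, eax              ; edx = b - a
        mov     eax, edx              ; a = b - a
        dec     ecx
        jnz     computeNextNumber     ; repeat n times
        mov     ebx, eax              ; exit code = a, the n-th Fibonacci number
        mov     eax, 1                ; the "exit" system call on 32-bit Linux
        int     80h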
Today, most programs are written in higher imperative languages like C++. We've come an enormously long way from writing programs in ones and zeros. There are two main streams of attempts to make programming languages more productive. One is to make declarative languages, and the other is to keep the languages imperative, but to change their grammar to resemble the grammars of human languages more (like the word order usually being subject-verb-object), and that's called object-oriented programming.
The first stream appears to be more scientific. Its proponents often do experiments to determine whether a particular feature makes programming languages more productive. But it's hard to tell, because this field of informatics, the comparison of programming languages, is filled with pseudoscience. Programmers are often quite dogmatic in defending their favorite programming languages.
A question that I sometimes ask myself is why we still use low-level programming languages for some tasks. A few decades ago, you could say that compilers were not advanced enough to target resource-constrained computers, but today you cannot say that. C++ compilers today often produce better code than an inexperienced assembly-language programmer would write. I think a much better explanation is that, when you are doing simple tasks on resource-constrained computers such as PicoBlaze, the benefits of using a high-level language are questionable. For one thing, you probably need to learn another high-level programming language, since PicoBlaze is not at all well-suited to being targeted with C. For example, C assumes it is easy to do floating-point operations (operations with decimal numbers), whereas that's not at all true for PicoBlaze (it may very well be impossible on PicoBlaze). So, to program PicoBlaze in a high-level language, you will need to learn some relatively alien high-level programming language. And the compiler for that language... hm... you cannot be certain it will be nearly as good as mainstream C++ compilers are. Furthermore, I think that any real high-level language makes the limits of the hardware far less transparent. PicoBlaze guarantees that each instruction will run in exactly two clock cycles, so its timing is completely predictable if you use assembly. If you use a high-level language, that benefit is lost. So is the insight that PicoBlaze can only have 16 bytes of local variables at once. And so on. Overall, I think that assembly language programming will always be a useful skill. Furthermore, you need to realize that, in embedded systems (for which PicoBlaze is used), you are rarely using complicated algorithms. Quite often, a P-controller (which simply multiplies the error by a constant) is enough to get a sane result. The constant of the P-controller is not calculated on PicoBlaze; it is calculated in programs such as Octave and MatLab, and only programmed into PicoBlaze. Sometimes a PI-controller (basically, the sum of previous errors affects the output in addition to the current error) is necessary, but that also isn't horribly difficult to implement in assembly. Sometimes some dynamic programming is necessary, but the code for dynamic programming algorithms is usually short and easy to implement in assembly (easier than QuickSort, at least), even though it is hard to understand how those algorithms work (or even what exactly they are doing: for a long time, I mistakenly believed that the LCS algorithm from dynamic programming is good for providing suggestions for misspelled variable names). Programming for embedded systems is a very different kind of programming than programming my PacMan game or programming the compiler for my programming language or my PicoBlaze assembler and emulator, it takes completely different skills, and the benefits of using a high-level language there are questionable.
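To make the controller terminology concrete, here is a rough sketch, in C-like code, of what a P-controller and a PI-controller compute (on PicoBlaze itself this would be written in assembly, and the constants would be calculated beforehand in Octave or MatLab; the names here are made up):

// KP and KI are constants calculated beforehand, outside of the
// embedded system (in Octave, MatLab...).
int p_controller(int error, int KP) {
  return KP * error; // the output is simply the error times a constant
}

int pi_controller(int error, int KP, int KI, int *sum_of_errors) {
  *sum_of_errors += error;                  // the sum of previous errors...
  return KP * error + KI * *sum_of_errors;  // ...also affects the output
}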
While I have a lot of theoretical knowledge of programming, I don't have experience with writing long programs. The most complicated thing I've made is probably the PacMan game I've posted on this site. It's written mostly in JavaScript, it's around 1300 lines long (it used to be 550 lines long, but I have added many new things to it over the years), and I had to solve a lot of algorithmic problems. (UPDATE on 08/05/2021: As of now, the most complicated program I have
made is the compiler for
my programming language
targetting WebAssembly, being written in C++ and having 5'500 lines of
code, excluding the testing parts written in JavaScript and the
example programs written in my programming language. The most
complicated program I have made in HTML5 is my
PicoBlaze assembler and emulator, having 3'500 lines of code.) I got the idea to make this website when I learned how much
Javascript (a programming language used by Internet browsers) has
changed since I last studied it. It's made making websites, web
applications and games much easier.
Everything on this website, including the animations and the game, is hand-written in HTML5 (a common name for CSS, JavaScript and HTML). I haven't used any special web-design tools or frameworks. Looking at the source code of this website might help you study HTML5, especially since I am still relatively a beginner (not knowing the "dirty tricks").
I hope that it will have some educational value. If you like the way I designed this website, you can make one that looks similar to mine by downloading the template I've designed here (I'll warn you that you will probably need to tweak it to work better in Safari on iPhone; I haven't bothered to make all the features available in a browser full of quirks and without developer tools that would allow me to explore them).
UPDATE on 10/02/2018: I've just made a simple arithmetic-expression-to-assembly compiler in JavaScript, runnable in a browser. I hope that playing with it will be useful in understanding how the programming languages work. If you are going to experiment with Assembly, you should probably use some virtualization software to protect the critical software on your computer from the damage poorly-written or malicious Assembly programs can do. You can read about my experience with free virtualization software here. (The back-end of that compiler can't work since this web-site is hosted on GitHub Pages now, and GitHub Pages doesn't support any PHP!)
UPDATE on 27/09/2019: I've just published two YouTube videos explaining why I think Donald Trump was wrong to ban Huawei, and why I think the new European Union Copyright directives won't be a big deal to the freedom of the Internet. You can see the video about the Huawei ban here (in case you have trouble playing it, try this and this), and you can see the video about Article 11 and Article 13 here (in case you have trouble playing that, try this and this).
UPDATE on 03/11/2019: I've made an example of how to implement the QuickSort algorithm (slightly modified to be easier to implement in a very low-level language, but not much slower than the traditional QuickSort) in the programming language I made, with comments in Croatian. You can see that here.
UPDATE on 20/11/2019: I've just published a YouTube video showing how you can set up a modern computer to be able to program it in your own programming language.
UPDATE on 23/12/2019: I've just written a seminar in Croatian about my implementation of QuickSort in my programming language, you can download it here (it's available in many different formats, so you almost certainly don't need to install any software in order to read it). If you really can't open any of those files, try this one (the formatting is greatly distorted).
UPDATE on 03/05/2020: I've just made a program in C++ that converts musical notes written in a text file into a simple binary format that can be played by some programs (and converted into mainstream formats by programs that come with Linux), to study how sound is represented in a computer. You can see it, along with hearing an example song, here.
UPDATE on 14/05/2020: I've just made a program that will graphically present Huffman encoding (a primitive form of data compression), you can see it here. I haven't bothered to make it work in old browsers. (UPDATE: I've made it work in Internet Explorer 11, it was easier than I expected. Still, it works in fewer browsers than the PacMan game does, because I relied on advanced JavaScript syntax to describe the algorithm, and there is no obvious way to do it in old JavaScript.)
UPDATE on 23/05/2020: I've just made a program that calculates the properties of the distribution of the numbers in the multiplication table. I don't know what this mathematical distribution is called. It won't work in older browsers, and it would be very hard to make it work there.
UPDATE on 08/08/2020: The Arithmetic Expression Compiler language can now be used to target the JavaScript Virtual Machine using WebAssembly (a bytecode for the virtual machines built into browsers, which Mozilla has been pushing to get standardized). An example of that is the implementation of the permutation algorithm, written in the Arithmetic Expression Compiler language and runnable in modern browsers.
UPDATE on 22/08/2020: I've started a Reddit thread about my programming language. (UPDATE on 18/12/2020: As well, I have written an informal specification of that language.)
UPDATE on 20/11/2020: As a part of a school project, I've written an assembler and a simulator for PicoBlaze (a small computer we use in laboratory exercises in our Computer Architecture classes) in JavaScript, which can be run in a modern browser (relatively modern ones, Internet Explorer 11 does not qualify, but some versions of Microsoft Edge which cannot run programs written in my programming language nevertheless can run that simulator and assembler, and so can Firefox 52, the last version of Firefox runnable on Windows XP). You can see it here.
UPDATE on 07/04/2022: I have made a video debunking Tony Heller's claims about election fraud. However, YouTube refuses to let me upload it there, so I have uploaded it to GitHub Pages. My best guess as to why it cannot be uploaded is that YouTube's artificial intelligence thinks I am claiming there was election fraud. Nothing could be further from the truth; I am critical of such claims. But that's how censorship using artificial intelligence works.
UPDATE on 05/01/2023: A question I often get asked on Internet forums is: if I have made my own programming language, why haven't I also made my own operating system? The answer is fairly simple: while I do have some ideas about what a good programming language would look like and how it would work internally (as you can probably tell by reading the documentation of my programming language), I have no idea how a good operating system would work. So, I haven't made my own operating system, and I probably never will.
UPDATE on 25/02/2023: A few months ago, I published a paper in Valpovački Godišnjak which applies informatics to linguistics (the names of places in Croatia). It is mentioned in Glas Slavonije. It is basically this text, just a slightly different version.
UPDATE on 07/09/2023: Our control engineering professor Dražen Slišković posted on Moodle the questions he will ask us at the oral exam. So, I started writing answers to those questions. I am writing the mathematical formulas there using the MathJAX JavaScript framework.
UPDATE on 09/08/2024: I've written a blog-post about the problems I've run into with CSS over the years.
UPDATE on 01/09/2024: I've published a YouTube video explaining why I regret studying computer engineering (MP4).
And do not expect universities to help you with programming significantly. (UPDATE on 01/01/2024: I am quite sure that, now I have graduated, I am actually less capable of being a useful programmer than I was back in 2018, when I started studying computer science at a university. Back in 2018, I didn't have a psychotic disorder and I had a lot more enthusiasm. And I am quite sure the university is at least partly responsible for me getting a psychotic disorder and losing enthusiasm. Even if my psychotic disorder was caused by me regularly taking Paracetamol and energy drinks (rather than the stress at the university), as my psychiatrist thinks... Would that have happened if I wasn't studying at a university or if I was studying something easier than computer science? I don't think so.) Let me tell you an anecdote from my experience studying computer science at the university: During the summer break, my father asked me which courses I have the next semester. I was naming the courses, and, when I said "object-oriented programming", my father interrupted me and said "How? Object-oriented programming? A really weird name. And, is there then some subject-oriented programming?" I said that, as far as I know, there isn't (only later did I find out that subject-oriented programming is indeed a thing). Then my father said: "I guess that's something that we historians can't understand. No, that, on Croatian language, that's not a good name.". After a few weeks, we met with some old friend of his. And my father told me: "So, tell him, what's the name of the course you have this semester.". So, I repeated: "object-oriented programming". And then my father asked him: "So, what does that name mean? Can you guess? Well, can you think of a name that's more stupid?". And the friend of my father said: "Well, I guess it's called object-oriented because programming is usually done by mathematicians and people from natural sciences. If programming were done by historians or poets, then it would be called subject-oriented programming.". I hope you get some idea how difficult it is to study computer science at the university. And it is important to understand that programming which is done at the university has little to do with programming in real life. Studying computer science at the university will familiarize you with computer science, electrical engineering (If you do not know what is electrical engineering, here is a quote from my professor Željko Hederić that I think wonderfully illustrates that: "When you try to spill water from a glass, the water will not start spilling all until some air gets into the glass, do we agree? Similarly, the electricity will not start flowing from a socket all until some magnetic field does not get into that socket. And that is, basically, what the Biot-Savart Law is saying.") and advanced mathematics (Much of the advanced mathematics is things you already know but in a very confusing language. Here is a joke I have written about it in Latin, based on a true story: Hodie in universitate (ego studeo scientiam computorum) docebamur de theoria unionum. Professor nobis explicabat, cur numerus cardinalis unionis unionum non semper sit summa (additio) cardinalum numerorum unionum: "Si hoc veritas esset, canis debet octo crura habere. Canis enim habet duo crura antica, duo crura posteriora, duo crura laeva, et duo crura dextera. Summa (additio) numerorum cardinalium earum unionum octo (quater bini) est, sed numerus cardinalis unionis earum unionum, sane, quattuor est.". 
And the rest of the university mathematics are some very difficult and rarely-useful things that can make your engineering marginally better. Take a look at the history of telephones. As far as I know, the only part of telephones where university-level mathematics is used is tone dialing, which uses Discrete Fourier Transform. And telephones functioned well before that using pulse dialing, university-level mathematics only made them marginally better.). By the way, professors will often be angry, and with an understandable reason. In real life, programming rarely involves advanced knowledge of even computer science (I think there are only two times in my projects where knowledge of computer science helped me significantly, when I used DFS algorithm to avoid stack overflow in my AEC-to-x86 compiler and when I used LCS algorithm from Dynamic Programming in my AEC-to-WebAssembly compiler to provide corrections for misspelled variable names; It is also possible this little knowledge of computer science that I have has guided me astray multiple times, as somebody on Discord suggested me a better solution for providing corrections for misspelled variable names than using LCS.), yet alone electrical engineering (I think my knowledge of electrical engineering never helped me with my projects) or mathematics (I think the only time my knowledge of advanced mathematics helped me is when implementing mathematical functions into my AEC-to-WebAssembly compiler. And note that, had I made that compiler properly, by targetting WebAssembly via LLVM instead of targetting WebAssembly directly, I would not have to do that.). When you program in the real world, you will probably spend most of your time with things such as getting your web-app to work in Internet Explorer, or something equivalent to that in parts of programming not related to web-apps (The thing that bothers me with making the compiler for my programming language right now is that, after I added the suggestions for misspelled variable names, the compiler crashes if it is compiled with Visual Studio or CLANG with some options on Windows, but apparently not if it is compiled using any other C++ compiler. What you will spend most of your time dealing with when programming are those annoying little things about programming tools that have little or nothing to do with the problem you are trying to solve. While stuff in programming languages such as portability, compiler warnings and exceptions are good things, you need to understand that, quite often, they are illusory and lulling programmers into false sense of security. A program that compiles and works in one C++ compiler can very well not even compile in another one, yet alone work. And it is also like that with Java. Java is supposed to be a compile-once-run-everywhere language, but a lot better description is compile-once-debug-everywhere. In theory, if you are triggering undefined behaviour, the C++ compilers should give you a warning. In reality, often enough for that to be a problem, none of them will end up warning you, as has happened in my case. In theory, if your program is misusing the C++ standard library, the C++ standard library should throw an exception, and that is what exceptions in programming languages are for. 
In reality, unless you know how the C++ standard library works in the smallest details, what will sooner or later happen is that your program will appear to work except for sometimes unpredictably having segmentation faults under one compiler, on an operating system under which the debugging tools you know how to use are not working. And there are reasons why C++ compilers and standard library are so permissive. The first reason is that there is a bunch of bad code already present in C++ projects, and compilers which complain about them will be perceived as faulty. I know how annoyed I feel when I try to build from source an older open-source C++ project with a modern C++ compiler and get tons of error messages, while using an older C++ compiler works. There is, unfortunately, an incentive not to fix ages-old bugs in programming languages and programming tools. The second reason is that C++ compilers and standard libraries need to make trade-offs between catching errors and being fast enough for correct programs that need high performance and fast compilation times. These are problems with computers that have nothing to do with computer science, yet alone mathematics or engineering, but which programmers need to deal with every day. Oh, and understand that you will sometimes have to apply fixes which you have no idea how they work. You will get a lot further by being empirical than by being a rationalist and only relying on things you understand. This is especially true when writing shell scripts, although it's also somewhat true in other types of programming. When writing a shell script to download, compile and run Analog Clock in AEC for x86, I ran into a problem that, on recent versions of Debian Linux, the linker insists that -lm, the option for linking in the math library, is put after the source files, and it outputs some confusing error message if it's put before the source files. A rationalist solution would be to try to implement the math functions that Duktape invokes yourself, like I've implemented them in my programming language when writing Analog Clock for WebAssembly. Instead, I did a bit of Googling, and found a way nicer solution: put -lm after the source files, and not before them. I do not understand how it works, but I can empirically say it does work. You can read the zero9178's explanation for that if you are interested, I do not fully understand it either, and I probably know more about compiler theory than most programmers. And when writing a shell script to download, compile and run Analog Clock in AEC for WebAssembly, I realized that the code my AEC-to-WebAssembly compiler outputs works only on NodeJS 11 and newer, because it relies on WebAssembly global variables. So, I decided to warn the user if they have installed NodeJS that's older than NodeJS 11. So, I wrote node_version=$(node -v) to store the NodeJS version string into a variable called node_version, so that I can extract the first number from it and act accordingly. That worked on Linux, but on Windows NodeJS outputted the error message "stdout is not a tty" instead of outputting the version string. I can't think of a rationalist work-around. But I was empirical, so I posted a question on StackOverflow, and I got the answer: on Windows, do node_version=$(node.exe -v). I almost did not try it, as it seemed so ridiculous. However, I was empirical enough that I tried it, and it somehow magically worked. I still have no idea how it works. 
It has something to do with the difference between how terminals work on Linux and how they work on Windows, but I don't understand the details. And like I've said, the fact that you sometimes stumble upon problems with truly mysterious fixes is true not only in shell scripting, but also in other types of programming, such as CSS. Look up the Internet Explorer 6 double-margin bug. Or how, when programming my PicoBlaze Simulator, I ran into a problem that the tooltips I made worked in Firefox but not in Chrome. In both cases, the fix seems so ridiculous that it's not even worth trying. For the Internet Explorer 6 double-margin bug, it was a mysterious bug in Internet Explorer 6. For the Chrome issue I've run into, the people on StackOverflow insist that Chrome is actually obeying the standard, while Firefox isn't. If so, the standard goes wildly against common sense here. Programming is an empirical thing, but universities pretend it's a rationalist thing.). You will gain next-to-no experience with that at the university, as the programming tools used at the university are different from the programming tools used in real life. JavaScript, for example, is taught very little at the university, and it is the most popular programming language these days, and will likely remain so in the future. Not because it is a good language, in fact, it is widely agreed to be an exceptionally poorly designed language, full of quirks which programmers need to spend a lot of time learning to use it effectively. It is the most popular programming language because of the technicallity that, in order to make your application run in an Internet browser, for most of the time the Internet has existed, there was no alternative. WebAssembly will replace a part of JavaScript, but probably not most of it. At the university, you will gain a lot of experience with programming languages such as MatLab, which is almost never used for software development in the real world, and is also very different from the languages used in the real world. My perception is that the knowledge that is gained at the university helps only when dealing with stuff such as nuclear reactors or medical devices. In those cases, it is useful to be able to make academic arguments that your program will work correctly in unexpected situations. In most other cases, though, knowledge that is gained at the university is not useful.
A common response given by professors on the university to this "Knowledge taught at the university is almost never useful." argument is something along the lines of "You will indeed only rarely need the stuff you learn here, but, unless you are taught them, you will not recognize when you need them.". The problem with that response that the same is true for most things in programming (and even for most things in life), and not only for algorithms and data structures and other things taught at the university. When I was designing this web-site back when I was a high-school student, I needed advanced CSS (CSS queries...), but I did not recognize that I needed it. Instead of learning advanced CSS, I did a lot of browser sniffing and other bad things in JavaScript. Knowing advanced CSS would save me a lot of work and give me a superior result. But I did not recognize that I needed it. Just like, when implementing suggestions for misspelled variable names in my AEC-to-WebAssembly compiler, I did not recognize I needed Levenshtein Distance (which, although it is a useful algorithm, we haven't been taught it in our Algorithms and Data Structures classes) and I instead used Longest Common Subsequence (which we also haven't been taught at the university, but I nevertheless happened to know it), which gave me significantly worse results. And who knows how many other things in programming I needed, but I did not recognize I needed them?
It is important to understand that while some technological advancements turn out to be life-saving medical technology or some other tool that obviously improves our quality of life, the vast majority of technological advancements are not like that. Like Ayn Rand said, most of the technological advancements are solving problems that technologies, that we became dependent on, created. Ayn Rand was referring to cars, but the same can be said for computers. In 1983, the engineers at Altera invented field-programmable gate arrays (FPGAs), that's intented to be useful as a CPU but whose architecture (whether it is x86 or ARM...) can be changed programmatically. Of course, FPGAs were nearly useless back then, as there was no high-level programming language targetting them. Then the engineers working at the US government invented VHDL, a programming language that can be used to target FPGAs. But FPGAs were still relatively useless as, well, no actual CPU can be synthesized to work on FPGAs. But then came the engineers at Xilinx who wrote PicoBlaze. While it solved some problems, it created new ones. To search for errors in programs written for PicoBlaze, it is useful to be able to run your programs on the computer you are developing that program on, which is only possible using an emulator. Furthermore, the assembler for PicoBlaze that Xilinx produces is, because of some technical details, difficult to run on various types of computers. So, my Computer Architecture professor Ivan Aleksi pointed out that problem to me and he suggested me to make a web-based assembler and emulator for PicoBlaze. And, when I made it, I created new problems: that emulator is slow and may even be incorrect in some cases (What happens to the flags when the regbank changes?). And Xilinx'es creation of PicoBlaze also created another problem: no compiler for a high-level language can actually target PicoBlaze. And that is difficult to solve, as PicoBlaze is a very difficult compilation target (you can trust me on that, as I made a two compilers for my programming language, but I have no idea how I would make one to target PicoBlaze). Aren't we trying to get out of a hole by digging? On what planet does that make sense? If you don't like completing such seemingly-pointless tasks, then don't be a programmer, as that's what programming is mostly about. My father, when I told him I think I should be a programmer who hasn't gone to a university, he told me: "You need to understand you will not be working on the abstract idea of programming. You will not be programming Jesus and Mary in the air, you will be programming actual devices which you need to understand how they work. And the university will help you understand how they work.". It seemed to me back then that it could be true, but it seems to me more and more that that couldn't be further from the truth. It seems to me now that most programmers are working on the abstract idea of programming, rather than programming something in reality.
If you would like to read more about what I think about the way programming is taught at the university, click here.
OK, now we can continue...
So, why am I interested in informatics? Because it works! I see that if I write something in one of the so-called programming languages, a computer understands me. Sometimes it's hard to understand a computer, and it's often hard to make yourself understood by a computer, but that's because computers are different from human beings. They can easily do things human beings have no hope of doing, like displaying animations (which is basically drawing tens of images per second). Human brains are powerful computers, but they are also very specialized to solve only certain kinds of tasks (the kinds that were useful during evolution). That's why what seems simple to us has little correlation with what is actually simple for a computer. That is well known as Moravec's paradox.
So, what are programming languages? Well, see, computers natively understand only machine code, made of ones and zeros. It's very hard for us to understand the language of ones and zeros. For similar reasons, computers have a very hard time understanding human languages. That's why we needed to invent some special languages that both humans and computers could understand. These are called programming languages. There are programs that translate programming languages into ones and zeros; these are called compilers and interpreters.
So, what are programming languages like? Well, there are two basic types of programming languages. One type is the so-called imperative languages, and the other is the so-called declarative languages. An example of an imperative language is C++, and an example of a declarative language is Haskell. Here is an on-line compiler for C++, and here is an on-line interpreter for Haskell. In declarative languages, the sentences would mostly translate to human languages as strict mathematical definitions, and in imperative languages, they would mostly translate as imperatives.
To explain the difference between the programming languages, I will use the following example of a simple program. Leonardo from Pisa, also known as Fibonacci, was a mathematician who introduced the Arabic numerals to Europe. He lived in the 12th and the 13th century. He worked on many natural sciences. One of the questions he asked himself was how fast rabbits would procreate if there were enough food for every single one of them. So he did some experiments. What he found out was that there was indeed a rule. Namely, the number of rabbits in some generation is equal to the sum of the numbers of rabbits in the previous two generations. For instance, if there are three rabbits in the current generation, and there were two rabbits in the previous generation, there will be five rabbits in the next generation. From then on, the sequence of numbers in which each one is equal to the sum of the previous two is called the Fibonacci sequence. The zeroth number in that sequence is defined to be zero, and the first one to be one. So, that sequence goes like this: 0, 1, 1, 2, 3, 5, 8, 13, 21... We want to make a program that finds some number that's far in that sequence (ignoring the obvious fact that that number would be far larger than the actual number of rabbits in nature because, well, once there are many of them, some of them will die before they procreate, either because of predators or because of starvation).
So, how will we do it in Haskell? We just need to translate a strict mathematical definition of the Fibonacci sequence to it: "The Fibonacci sequence is a sequence of integers (whole numbers). The zeroth number in that sequence is zero. The first one is one. Every other number in it is equal to the sum of the two right before it." Here we go:
fibonacci :: Integer -> Integer
fibonacci 0 = 0
fibonacci 1 = 1
fibonacci n = fibonacci (n-1) + fibonacci (n-2)

If you study the code (a set of sentences in a programming language), I believe it will become clear that it's a literal translation of the four sentences, each in its own row.
Now, how will we translate that to C++? We can't do it literally. We need to make an algorithm, a sequence of instructions a computer has to follow in order to calculate it. A concept you probably need to understand for that is called variables. Variables are readable and writable places in memory that a symbol is assigned to. They can store various pieces of information; in this case, they will store whole numbers. So, in C++, when you say int a; that means "Create a variable that stores integers (int means integer) and assign it the symbol 'a'." (the semicolon, in this case, marks the end of a sentence). Now, if you say a=5;, that means "Store the number 5 in the variable 'a'." If, after that, you say a=a+5;, that means "Store the number a+5=5+5 (since we previously stored the number 5 into 'a')=10 into 'a'.". So, what we will do is make a program that has two variables, 'a' and 'b'. In the beginning, 'a' will be zero and 'b' will be one. Now, we will add 'a' to 'b', and we will then assign the difference between 'b' and 'a' to 'a'. And we will repeat that 'n' times, where 'n' is the index of the Fibonacci number we are trying to find. Then we will say that 'a' is the nth Fibonacci number. Let's say we want to find the third Fibonacci number. So, in the zeroth step, 'a' is 0 and 'b' is 1. In the first step, b=a+b=0+1=1, and a=b-a=1-0=1. In the second step, b=a+b=1+1=2, and a=b-a=2-1=1. And in the third step, b=a+b=1+2=3, and a=b-a=3-1=2. There we go, we will say that the solution is a=2. In C++, we say that we "return" 'a' (that phrase makes sense once you look deeper into the language). The usual way of saying you want to repeat something 'n' times in C++ is to say something that would literally translate as "For every integer 'i' from zero that's smaller than 'n', increasing 'i' every time by one, do...". Without further ado, here is the code:
int fibonacci(int n) {
    int a=0, b=1;
    for (int i=0; i<n; i=i+1) {
        b=a+b;
        a=b-a;
    }
    return a;
}

A bit puzzling? Well, see, C++ is actually way easier for a computer to understand than Haskell is. Also, it gives the programmers more control over their programs. In Haskell, they tell the computer what to do, and in C++, they tell it how to do it. So, they can ensure they do it in an efficient way. Today, you still can't trust the compiler to do that for you. Imperative languages are also more commonly used than declarative languages simply because declarative languages feel alien to most programmers. When programming in declarative languages, you cannot do things the way you are used to doing them. Or, more commonly, you can, but it is discouraged (Haskell has a for-loop, but using it is discouraged because it is not idiomatic). An unfortunate but undeniable truth is that, if you designed a perfect programming language (much easier to understand both by humans and by computers than modern programming languages are), programmers would refuse to use it because it would feel too alien. In case there is any confusion, I am not claiming that my programming language is anything close to perfect; in fact, it is designed much more to be familiar than to implement innovative ideas.
Imperative languages are divided into the so-called higher-level and lower-level imperative languages. C++ is a higher-level imperative language. Lower-level imperative languages are rarely used today. They are hard for a human to understand, but easier for a computer. An example of a lower-level imperative language is Assembly. Unlike Haskell or C++, it has many dialects. In fact, in general, each Assembly compiler has its own dialect of Assembly. So, an Assembly program that works on Windows doesn't work on Linux, even if you have an Assembly compiler for Linux. A dialect of Assembly I am somewhat familiar with is Flat Assembler. Here is what the program would look like in Flat Assembler:
.global fibonacci
fibonacci:
    mov eax,0
    mov ebx,1
    mov ecx, edi
loop1:
    xchg eax,ebx
    add eax,ebx
    loop loop1
    ret

As you've probably guessed, this is not a literal translation from C++. That's because it can't be. I can't really explain this program simply. eax, ebx, ecx and edi are the so-called registers. They are like variables, except that they aren't in the memory of the computer, but in the processor. mov eax,0 would translate to C++ as eax=0;, and add eax,ebx would translate to eax=eax+ebx;. xchg eax,ebx has no equivalent in C++; it means "Let eax and ebx exchange the numbers stored in them.". For instance, if eax was 0 and ebx was 1 before that sentence, after that sentence eax would be 1 and ebx would be 0. Words such as mov, add and xchg are called mnemonics. loop1: creates a symbol for a place in the program called loop1. loop loop1 means "Decrease the number stored in ecx by one and, if it is still bigger than 0, turn the execution of the program back to loop1 (so that the two sentences between loop1: and loop loop1 repeat themselves, creating a loop).". We say that loop jumps to loop1. .global fibonacci tells the compiler (actually a program beside the compiler, called the "linker") that fibonacci isn't just a place you can "jump" to, but the name of a subprogram. So, before another part of the program starts this subprogram, it should store the index of the Fibonacci number it wants in edi (if it wants the fifth Fibonacci number, it should store 5 in edi), and this subprogram will return the result in eax. I hope I've given you some basic idea of what lower-level imperative languages are like. (UPDATE: I've written a compiler in JavaScript for my own simple programming language. The core of it, capable of compiling arithmetic expressions, can be run in a browser here. It produces assembly code you can study if you are interested. The command-line version of my compiler, runnable in the JavaScript engines Rhino and Duktape, can compile some rather complicated programs, such as the sorting algorithm I've made. You can see the assembly code it produces for that here. And if you think that's complicated, just look at the assembly code a professional compiler generates for equivalent code here. By the way, if you want to try yourself at assembly language programming, I have made, as part of a school project, a PicoBlaze Assembler and Simulator in JavaScript. It can be run in a modern browser; you don't need to install, or even download, anything. The program here is in x86 assembly, so it won't work in that simulator, but you have an equivalent program as an example there.)
Today, most programs are written in higher-level imperative languages like C++. We've come an enormously long way from writing programs in ones and zeros. There are two main streams of attempts to make programming languages more productive. One is to make declarative languages, and the other is to keep the languages imperative, but to change their grammar to resemble the grammars of human languages more (like the word order usually being subject-verb-object), and that's called object-oriented programming (a tiny illustration of that word order follows below).
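Here is what I mean by the subject-verb-object word order, as a tiny C++ sketch I've made up just for this page (the Rabbit class and its method are hypothetical, not taken from any real program): a method call on an object reads almost like an English sentence.

#include <iostream>
#include <string>

// A toy class invented only for illustration.
class Rabbit {
public:
    void eat(const std::string &food) {
        std::cout << "The rabbit eats " << food << ".\n";
    }
};

int main() {
    Rabbit rabbit;
    rabbit.eat("a carrot"); // reads as "rabbit, eat a carrot": subject, verb, object
    return 0;
}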
Of those two streams, the first one (declarative languages) appears to be more scientific. Experiments are often done to determine whether a particular feature makes programming languages more productive. But it's hard to tell, because this field of informatics, the comparison of programming languages, is filled with pseudoscience. Programmers are often quite dogmatic in defending their favorite programming languages.
A question that I sometimes ask myself is why we still use low-level programming languages for some tasks. A few decades ago, you could say that compilers were not advanced enough to target resource-constrained computers, but today you cannot say that. C++ compilers today often produce better code than an inexperienced assembly language programmer would write. I think a much better explanation is that, when you are doing simple tasks on resource-constrained computers such as PicoBlaze, the benefits of using a high-level language are questionable. For one thing, you probably need to learn another high-level programming language, since PicoBlaze is not at all well-suited for being targeted with C. For example, C assumes it is easy to do floating-point operations (operations with decimal numbers), whereas that's not at all true for PicoBlaze (it may very well be impossible on PicoBlaze). So, to program PicoBlaze in a high-level language, you would need to learn some relatively alien high-level programming language. And the compiler for that language... hm... you cannot be certain it will be nearly as good as mainstream C++ compilers are. Furthermore, I think that any real high-level language makes the limits of the hardware far less transparent. PicoBlaze guarantees you that each instruction will run in exactly two clock ticks, so its timing is completely predictable if you use assembly. If you use a high-level language, that benefit is lost. So is the insight that PicoBlaze can only have 16 bytes of local variables at once. And so on. Overall, I think that assembly language programming will always be a useful skill. Furthermore, you need to realize that, in embedded systems (for which PicoBlaze is used), you are rarely using complicated algorithms. Quite often, a P-controller (which simply multiplies the error by a constant) is enough to get a sane result. The constant of the P-controller is not calculated on PicoBlaze; it is calculated in programs such as Octave and MatLab, and only programmed into PicoBlaze. Sometimes a PI-controller (basically, one in which the sum of previous errors affects the output in addition to the current error) is necessary, but that also isn't horribly difficult to implement in assembly. Sometimes some dynamic programming is necessary, but the code for dynamic programming algorithms is usually short and easy to implement in assembly (easier than QuickSort, at least), even though it is hard to understand how those algorithms work (or even what exactly they are doing: for a long time, I mistakenly believed that the LCS algorithm from dynamic programming is good for providing suggestions for misspelled variable names). Programming for embedded systems is a very different kind of programming than programming my PacMan game or programming the compiler for my programming language or my PicoBlaze assembler and emulator; it takes completely different skills, and the benefits of using a high-level language there are questionable.
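In case it is not clear what those controllers compute, here is a minimal sketch in C++ (not in PicoBlaze assembly, and with made-up placeholder gains; real gains would be tuned in a program such as Octave or MatLab). It uses only integer arithmetic, since floating-point operations are impractical on chips like PicoBlaze.

// Placeholder gains, invented for this example; real values would be tuned elsewhere.
const int KP = 3; // proportional gain
const int KI = 1; // integral gain

// P-controller: the output is simply the error multiplied by a constant.
int pController(int error) {
    return KP * error;
}

// PI-controller: the sum of all previous errors affects the output
// in addition to the current error.
int integralOfError = 0;
int piController(int error) {
    integralOfError += error;
    return KP * error + KI * integralOfError;
}

In a real embedded program, the error would be the difference between the desired value and the value measured by a sensor, and the controller would be called once per control cycle.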
While I have a lot of theoretical knowledge of programming, I don't have experience with writing long programs. The most complicated thing I've made is probably the PacMan game I've posted on this site. It's written mostly in JavaScript, it's around
Everything on this website, including the animations and the game, is hand-written in HTML5 (a common name for CSS, JavaScript and HTML). I haven't used any special web-design tools or frameworks. Looking at the source code of this website might help you study HTML5, especially since I am still a relative beginner (not knowing the "dirty tricks").
I hope that it will have some educational value. If you like the way I designed this website, you can make one that looks similar to mine by downloading the template I've designed here (I'll warn you that you will probably want to adjust it to work better in Safari on iPhone; I haven't bothered to make all the features available in a browser full of quirks and without the developer tools that would allow me to explore them).
UPDATE on 10/02/2018: I've just made a simple arithmetic-expression-to-assembly compiler in JavaScript, runnable in a browser. I hope that playing with it will be useful in understanding how programming languages work. If you are going to experiment with Assembly, you should probably use some virtualization software to protect the critical software on your computer from the damage poorly-written or malicious Assembly programs can do. You can read about my experience with free virtualization software here. (The back-end of that compiler can't work any more, since this web-site is now hosted on GitHub Pages, and GitHub Pages doesn't support PHP!)
UPDATE on 27/09/2019: I've just published two YouTube videos explaining why I think Donald Trump was wrong to ban Huawei, and why I think the new European Union Copyright directives won't be a big deal to the freedom of the Internet. You can see the video about the Huawei ban here (in case you have trouble playing it, try this and this), and you can see the video about Article 11 and Article 13 here (in case you have trouble playing that, try this and this).
UPDATE on 03/11/2019: I've made an example of how to implement the QuickSort algorithm (slightly modified to be easier to implement in a very low-level language, but not much slower than the traditional QuickSort) in the programming language I made, with comments in Croatian. You can see that here.
UPDATE on 20/11/2019: I've just published a YouTube video showing how you can set up a modern computer to be able to program it in your own programming language.
UPDATE on 23/12/2019: I've just written a seminar in Croatian about my implementation of QuickSort in my programming language, you can download it here (it's available in many different formats, so you almost certainly don't need to install any software in order to read it). If you really can't open any of those files, try this one (the formatting is greatly distorted).
UPDATE on 03/05/2020: I've just made a program in C++ that converts musical notes written in a text file into a simple binary format that can be played by some programs (and converted into mainstream formats by programs that come with Linux), to study how sound is represented in a computer. You can see it, along with hearing an example song, here.
UPDATE on 14/05/2020: I've just made a program that will graphically present Huffman encoding (a primitive form of data compression); you can see it here. I haven't bothered to make it work in old browsers. (UPDATE: I've made it work in Internet Explorer 11; it was easier than I expected. Still, it works in fewer browsers than the PacMan game does, because I relied on advanced JavaScript syntax to describe the algorithm, and there is no obvious way to do it in old JavaScript.)
UPDATE on 23/05/2020: I've just made a program that calculates the properties of the distribution of the numbers in the multiplication table. I don't know what this mathematical distribution is called. It won't work in older browsers, and it would be very hard to make it work there.
UPDATE on 08/08/2020: The Arithmetic Expression Compiler language can now be used to target the JavaScript Virtual Machine using WebAssembly (a bytecode with a standard textual representation, runnable by JavaScript virtual machines, which Mozilla has been pushing to get standardized). An example of that is the implementation of the permutation algorithm written in the Arithmetic Expression Compiler language and runnable in modern browsers.
UPDATE on 22/08/2020: I've started a Reddit thread about my programming language. (UPDATE on 18/12/2020: As well, I have written an informal specification of that language.)
UPDATE on 20/11/2020: As a part of a school project, I've written an assembler and a simulator for PicoBlaze (a small computer we use in laboratory exercises in our Computer Architecture classes) in JavaScript, which can be run in a relatively modern browser (Internet Explorer 11 does not qualify, but some versions of Microsoft Edge that cannot run programs written in my programming language can nevertheless run that simulator and assembler, and so can Firefox 52, the last version of Firefox runnable on Windows XP). You can see it here.
UPDATE on 07/04/2022: I have made a video debunking Tony Heller's claims about election fraud. However, YouTube refuses to let me upload it there, so I have uploaded it to GitHub Pages. My best guess as to why it cannot be uploaded is that YouTube's artificial intelligence thinks I am claiming election fraud. Nothing could be further from the truth; I am criticizing such claims. But that's how censorship using artificial intelligence works.
UPDATE on 05/01/2023: A question I often get asked on Internet forums is: if I have made my own programming language, why haven't I also made my own operating system? The answer is fairly simple: while I do have some ideas about what a good programming language should look like and how it should work internally (as you can probably tell by reading the documentation of my programming language), I have no idea how a good operating system would work. So, I haven't made my own operating system, and I probably never will.
UPDATE on 25/02/2023: A few months ago, I published a paper in Valpovački Godišnjak which applies informatics to linguistics (the names of places in Croatia). It is mentioned in Glas Slavonije. It is basically this text, just a slightly different version.
UPDATE on 07/09/2023: Our control engineering professor Dražen Slišković posted on Moodle the questions he will ask us at the oral exam. So, I started writing answers to those questions. I am writing the mathematical formulas there using the MathJAX JavaScript framework.
UPDATE on 09/08/2024: I've written a blog-post about the problems I've run into with CSS over the years.
UPDATE on 01/09/2024: I've published a YouTube video explaining why I regret studying computer engineering (MP4).
A simple 3D animation in JavaScript. Hover over it to rotate the tetrahedron.