If there is one thing Stallman knows well, it is the way he uses words, and I can assure you that if he calls something "evil", that is exactly the word he meant to use.
> user freedom, not creators freedom
In his view users are the creators and creators are the users. The only freedom he asks you to give up is the freedom to limit the freedom of others.
RMS asks you to give something up: your right to share a thing you made under your own conditions (conditions even the receiving party may agree to). Nobody is forced in this situation, and yet he calls that evil. I think that is wrong.
I love FOSS, don't get me wrong. But people should be able to say: I made this; if you want to use it, it's under these conditions, or I won't share it.
Again, imho the GPL is a blessing for humanity, and bless the people that choose it freely.
> RMS asks you to give something up: your right to share a thing you made under your own conditions (conditions even the receiving party may agree to). Nobody is forced in this situation, and yet he calls that evil. I think that is wrong.
This is not true, though. As a copyright holder, you are allowed to license your work however you wish, even if it is also released under, for example, GPL-3.0-or-later. You can license your code outside the terms of the GPL to a particular user or group of users, for example for payment.
Really, it's only when a user agrees to abide by the license that you'd have to give them access to the source code when asked, for example.
> I love FOSS, don't get me wrong. But people should be able to say: I made this; if you want to use it, it's under these conditions, or I won't share it.
And they can. Whether that wins one any friends or not is another matter.
You can follow him on https://stallman.org/
What is he doing? I believe he is still giving talks and taking stances on current-day political issues.
Additionally, I believe the last few years were quite turbulent, so I assume he is taking life at his own pace.
As soon as you forget (or your adversary manages to delete) the '\0' at the end of any string, you may induce buffer overflows, get the application to leak secrets, and so on. Several standard library functions related to strings are prone to timing attacks, or have weird semantics that may expose you to attack. If you roll your own security-related functions (typical example: a scrubber for strings that hold secrets), you need to make sure these do not get optimised away by the compiler.
There's an awful lot of pitfalls and footguns in there.
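To make the scrubber point concrete, here is a minimal sketch of my own (not from the original comment) of how a naive scrub can be removed by dead-store elimination, plus one common volatile-pointer workaround; dedicated APIs like explicit_bzero or C11's memset_s exist on some platforms for exactly this reason.

    #include <string.h>

    /* Naive scrub: since `secret` is never read again, the compiler may
       legally delete this memset as a dead store. */
    void scrub_naive(char *secret, size_t len) {
        memset(secret, 0, len);
    }

    /* Common workaround: write through a volatile pointer so the stores
       cannot be proven dead and optimised away. */
    void scrub_volatile(char *secret, size_t len) {
        volatile char *p = secret;
        while (len--)
            *p++ = 0;
    }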
I thought you meant that a hello world or similar program only handling strings would be fundamentally insecure, but rather you mean that it is hard to write secure code with C strings.
There are indeed a lot of pitfalls and footguns in C in general, but I would argue that has more to do with C's memory-focused design. I always feel like C strings are a bit of an afterthought, though they do conform well to the overall C design. Perhaps it is more of a syntax issue, where the memory handling of strings is quite abstracted and not very clear to the programmer.
> I thought you meant that a hello world or similar program only handling strings would be fundamentally insecure, but rather you mean that it is hard to write secure code with C strings.
Disclaimer: I am not the author of the comment, and honestly I am more than happy if OpenBSD broke %n in printf because it looks awful from a security standpoint.
> you mean that it is hard to write secure code with C strings.
Indeed I do :) It is possible to write a "secure" hello world program in C; the point is that both the language and the standard library make it exceedingly easy to slip in attack vectors when you deal with strings in any serious capacity.
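To illustrate the %n concern mentioned above (my own sketch, not the commenter's): %n is harmless when the format string is a constant you wrote, and becomes a write primitive the moment an attacker controls the format string.

    #include <stdio.h>

    int main(void) {
        int written = 0;
        char user_input[] = "hello %n";  /* pretend this arrived over the network */

        /* Legitimate use: %n stores the number of characters printed so far. */
        printf("count: %n\n", &written);
        printf("(%d characters were printed before %%n)\n", written);

        /* Classic vulnerability: untrusted data used as the format string.
           %n (and %s, %x, ...) would be interpreted, reading or writing
           through whatever happens to sit in the variadic argument slots.
           Left commented out on purpose. */
        /* printf(user_input); */

        /* Safe pattern: a fixed format string. */
        printf("%s\n", user_input);
        return 0;
    }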
You would be surprised how many new mappers don't include a clear exit in their levels.
> heavily pierced with phrases like "ummm, hard to explain".
Good vs bad level design is always subjective, as is generally the case in design; what is experienced as good or bad depends on the preferences and experience of the player, and perhaps even on the setting (competitive vs casual).
In level design, what qualifies as a "good level" depends heavily on game-design decisions, and learning this is very important to making good levels. For example, even Q1 and Q2 differ in what counts as good design because of the technical differences between the two games (full 3D rendering in Q2): good Q1 levels are more fast-paced "run and gun", while Q2 enemies force a more tactical approach (they take more shots to kill). Even so, some qualities of good design overlap between games; there is a lot of overlap across FPS levels in general. And even within the same game, multiplayer and singleplayer maps have very different requirements.
> And for my next trick, I shall <drum roll> attempt to... </dr> ...define the undefinable!!!
While perhaps not perfect, I think this article points out some of the common pitfalls for inexperienced mappers. That is important especially for beginning mappers, as it will allow them to develop a sense of good vs bad design quickly.
I also don't think the author came up with these requirements out of nowhere; they are echoed throughout the (Quake) mapping community, and I think it is a good effort to put them into text and allow discussion for this particular game.
In my experience the quake mapping community is very welcoming to new mappers.
There are a handful of Quake mapping Discord channels where new and old mappers regularly share screenshots and tips, and where you can ask for feedback and play-testing.
For anyone interested in what quake mapping is like these days, I can recommend the latest mapping jam "Quake Brutalist Jam III (QBJ3)". I believe there are a bunch of mappers releasing for the first time for this map pack.
EDIT: I looked into it, and it seems the article was first captured around 2006; at that time the Quake scene was perhaps less welcoming to new mappers or to maps of low quality.
I agree the ELIZA effect is strong; additionally, I think there is some kind of natural selection at work.
I feel like LLMs are specifically selected to impress people who have a lot of influence, such as investors and CEOs, because an "AI" that does not impress this section of the population does not get adopted widely.
This is one of the reasons I think AI will never really be an expert as it does not need to be. It only needs to adopt a skill (for example coding) to pass the examination of the groups that decide if it is to be used. It needs to be "good enough to pass".
I got this wild idea a short while ago and your comment helped cement it: probably one of the reasons why languages like Lisp are not "successful" has something to do with the impressability factor? If the people with money (and the decision) do not understand the tech or are not able to even fake that understanding, will they bet their money on it?
> If the people with money (and the decision) do not understand the tech or are not able to even fake that understanding, will they bet their money on it?
> Edsger Dijkstra called it nearly 50 years ago: we will never be programming in English, or French, or Spanish. Natural languages have not evolved to be precise enough and unambiguous enough. Semantic ambiguity and language entropy will always defeat this ambition.
This is the most important quote for any AI coding discussion.
Anyone that doesn't understand how the tools they use came to be is doomed to reinvent them.
> The folly of many people now claiming that “prompts are the new source code”,
These are the same people that create applications in MS Excel.
Further evidence: After all these years (far longer than we have been doing programming), we still don't do math in English or French or Spanish. We do it in carefully defined formal notations.
But so many problems (programming, but also physics and math) start as informal word problems. A major skill is being able to turn the informal into something formal and precise.
> These are the same people that create applications in MS Excel.
Ones that want their application to work? :) The last piece of software you should be knocking is MS Excel; in my 30+ year career, the one common thread is that just about everywhere I worked (or contracted at) has used Excel to run some amazing sh*t.
Everywhere I've worked as a software engineer the past 30 years I've seen Excel spreadsheets, but rarely anything amazing; maybe once, back in the 1990s at one place, by an Excel guru, but those are rare. The vast majority of the time Excel is used to make essentially table layouts of data, maybe with some simple cell calculations.
I dunno much, but I do know that if you can start a business that replaces Excel spreadsheets with applications your business builds, you'd be the world's first trillionaire (many "tri" over) :)
Excel (or similar spreadsheet programs) is indeed great and has its place. There are certain areas where there is no real replacement, which is impressive. However, I think that creating (interactive) applications is not one of the jobs for which Excel is the best tool.
This is exactly the argument I try to make: Excel (spreadsheets) is a great interface for processing and working with certain types of data (think economic data etc.), but it is not great for other things. There we need a different interface to efficiently communicate our intent. For example, programming languages, or even writing a novel, would not work very well in an Excel sheet (though no doubt someone has attempted it).
I think programmers often underestimate the power of Excel for non-programmers; in practice it runs the business world.
I think it is also comparable to the AI side we see now.
Doing something for real? Use a real database or a programmer.
Non-programmer needs something? Vibe code or Excel.
This post reminded me of the blog posts [0] regarding the "Megapixels" camera app for the PinePhone, written by Martijn Braam.
For those interested, it dives quite deep into the color profiling, noise reduction, and more that make the PinePhone camera useful.
A DSLR and mobile phone camera optimize for different things and can't really be compared.
Mobile phone cameras are severely handicapped by their optics and sensor size. Therefore, to create an acceptable picture (to share on social media) they need to do a lot of processing.
DSLRs and professional cameras feature much better hardware. Here the optics and sensor size/type matter because they optimize the actual light being captured. Additionally, in a professional setting the image is usually captured in a raw format and adjusted/balanced afterwards to allow for certain artistic styles.
Ultimately the quality of a picture is not bound to its resolution but to the amount and quality of light captured.
> A DSLR and mobile phone camera optimize for different things and can't really be compared.
You sound exactly like the sales guy trying to explain why that Indigo workstation is “different” even though it was performing the exact same vector and matrix algebra as my gaming GPU. The. Exact. Same. Thing.
Everything else you’ve said is irrelevant to computational photography. If anything, it helps matters because there’s better raw data to work with.
The real reason is that one group had to solve these problems, the other could keep making excuses for why it was “impossible” while the problem clearly wasn’t.
And anyway, what I’m after isn’t even in-body processing! I’m happy to take the RAW images and grind them through an AI that barely fits into a 5090 and warms my room appreciably for each photo processed.
Most likely one reason is that, to do that, you'd have to add the price of a fancy smartphone to that of a nice camera, i.e. ~$1000 extra for a feature professionals often prefer to do offline, since they can get good focus and color using optics and professional lights.
There are many things wrong with this. I have an iPhone 17 Pro Max and use it to capture HEIF 48 and ProRAW images for Lightroom. There's no doubt of the extraordinary capabilities of modern phone cameras. And there are camera applications that give you a sense of the sensor data captured, which only further illustrates the dazzling wizardry between sensor capture and the image seen by laypeople.
That said, there is literally no comparison between the iPhone camera and the RAW photos captured on a modern full-frame mirrorless camera like my Nikon Z6III or Z9. I can’t mount a 180-600mm telephoto lens to an iPhone, or a 24-120mm, or use a teleconverter. Nor can I instantly swing an iPhone and capture a bird or aircraft flying by at high speed and instantly lock and track focus in 3D, capture 30 RAW images per second at 45MP (or 120 JPEGs per second), all while controlling aperture, shutter speed and ISO.
Physics is a thing. The large sensor size and lenses (that can make a Mac Studio seem cheap by comparison) serve a purpose. Try capturing even a remotely similar image on an iPhone in low light, and especially RAW, and you'll be sitting there waiting seconds or more for a single image. Professional lenses can easily contain 25 individual lens elements that move in conjunction as groups for autofocus, zoom, motion stabilization, etc. They're state-of-the-art modern marvels that make an iPhone's subject detection pale in comparison. Examples:
I can lock on immediately to a small bird’s eye 300 feet away with a square tracking the tiny eye precisely, and continue tracking. The same applies to pets, people, vehicles, and more with AI detection.
You can handhold a low-light shot at 1/15s to capture a waterfall with motion blur and continue shooting, with the camera optimizing the stabilization around the focus point—that’s the sensor and lens working in conjunction for real-time stabilization for standard shots, or “sports mode” for rapidly panning horizontally or vertically.
There’s a reason pro-grade cameras exist and people use them. See Simon D’entrement, Steve Perry, and many others on YouTube for examples.
For most people, it doesn’t matter. They can happily shoot still images and even amazingly high-quality video these days. But dismissing the differences is wildly misleading. These cameras require memory cards that cost half as much or more than the latest iPhone, and for good reason [1].
With everything, there are trade-offs. An iPhone fits in my pocket. A Nikon Z8 and 800mm lens and associated gear is a beast. Different tools, different job.
You are totally missing my point and talking past me. I have a Nikon Z8! I know what it is capable of!
The point I'm trying to make is that the RAW images coming out of a modern full-frame camera get very "light" processing in a typical workflow (i.e.: Adobe Lightroom), little more than debayering before all further treatment is in ordinary RGB space.
Modern mobile phones have sensors with just as many megapixels, capturing a volume of raw data (measured in 'bits') that is essentially identical to a high-end full-frame sensor!
The difference is that mobile phones capture and digitally merge multiple frames captured in a sequence to widen the HDR dynamic range and reduce noise. They can even merge images taken from slightly different perspectives or with moving objects. They also apply tricks like debayering that is aware of pixel-level sensor characteristics and is tuned to the specific make and model instead of shared across all cameras ever made, which is typical of something like Lightroom, Darktable, or whatever.
If I capture a 20 fps burst with a Nikon Z series camera... I can pick one. That's about the only operation I can do with those images! Why can't I merge multiple exposures with motion compensation to get an effective 10 ISO instead of 64, but without the blur from camera motion?
None of this has anything to do with lenses, auto-focus, etc...
I'm talking about applying "modern GPU" levels of computer power to the raw bits coming off a bayer sensor, whether that's in a phone or a camera. The phone can do it! Why can't Lightroom!?
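For what it's worth, the naive core of such a merge is small; here is a hypothetical sketch of my own, assuming already-aligned 16-bit raw frames. Averaging N frames reduces uncorrelated sensor noise by roughly sqrt(N), which is the idea behind "effective ISO 10 from ISO 64"; the hard part the phones solve, alignment and motion compensation, is exactly what is omitted here.

    #include <stddef.h>
    #include <stdint.h>

    /* Average n_frames already-aligned raw frames into one output frame.
       Uncorrelated read/shot noise drops roughly as sqrt(n_frames). */
    void stack_frames(const uint16_t *const *frames, size_t n_frames,
                      size_t n_pixels, uint16_t *out)
    {
        for (size_t px = 0; px < n_pixels; ++px) {
            uint32_t sum = 0;
            for (size_t f = 0; f < n_frames; ++f)
                sum += frames[f][px];
            out[px] = (uint16_t)(sum / n_frames);
        }
    }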
> I have a Nikon Z8! I know what it is capable of!
It seems to me you underestimate the amount of work your camera is already doing. I feel like you overestimate the raw quality of a mobile camera as well.
> Modern mobile phones have sensors with just as many megapixels, capturing a volume of raw data (measured in 'bits') that is essentially identical to a high-end full-frame sensor!
There may be the same number of bits, but that doesn't mean it captures the same quality of signal. It's like saying that a higher number of bits on an ADC corresponds to a better quality signal on the line; it just isn't true. Megapixels are overhyped, and resolution isn't everything for picture quality.
> The phone can do it! Why can't Lightroom!?
Be the change you want to see: if the features you want are not in Lightroom, write a tool that implements them (or add the features to a tool like ffmpeg). The features you are talking about are just software applied after capture, so it should be possible to do from the camera's raw files.
Perhaps you would be better off buying a high-quality point-and-shoot camera, or just using your phone instead of a semi-professional full-frame camera, for your purpose. With a DSLR you have options for how to process; if that means "light" processing in your typical workflow, then that's up to you. Perhaps if you want to point, shoot, and Instagram, you indeed don't want to spend time processing in Lightroom, and that's fine.
It feels like you are complaining about how your expensive pickup can't fit your family and suitcases when going on holiday like the neighbor's SUV, even though they have the same amount of horsepower and are built on the same chassis. They are obviously built for different purposes.