But the dashboard is not important at all, because everyone can have the same dashboard the same way you have it. It's like generating a static website with Hugo and applying a stock theme: the end product is something that rolled off an assembly line. No taste, no soul, no effort. (Of course, there is effort behind designing and building the assembly line, but not in the products it churns out.)
Now, if you want to use the dashboard to do something else really brilliant, it is good enough as a means. Just make sure the dashboard is not the end.
The dashboard is just an example. The real question is how much of the know-how we use in our work can be replaced by AI transforming other people's existing work. I think it hinges on how many new problems or new business demands show up. If we only work on small variations of existing business, our know-how will quickly converge (e.g., building a dashboard or a vanilla linear regression model), and AI will spew out such code for many of us.
Strictly speaking, Lua is not global by default. All free names, that is, all names not declared with `local`, are actually indexed from a table `_ENV`, which is set to `_G`, the global environment. So all free names are effectively global by default, but you can change this behavior by putting this line at the top of your file: `local _G = _G; _ENV = {};`. This way, all free names are indexed from the new table, and global names must be explicitly accessed through `_G`, which is now a local variable. However, I have never seen this practice in the wild. Maybe it is just simpler to accept that all free names are global and to explicitly declare the local ones.
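For example, here is a minimal sketch of what that looks like in a file (assuming Lua 5.2 or later, where free names are compiled as lookups on the `_ENV` upvalue):

```lua
local _G = _G  -- capture the real global table first
_ENV = {}      -- from here on, free names index this fresh, empty table

x = 1              -- creates a field in the new _ENV, not a real global
_G.print(x)        --> 1   (globals like print must now go through _G)
_G.print(_G.x)     --> nil (the real global table was never touched)
```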
Thanks to Lua’s great metaprogramming facilities, and the fact that `_G` is just a table, another workaround is to give `_G` a `__newindex` metamethod that throws an error when you try to create a global. That way you can still create globals with `rawset` if you really want them, but it prevents you from creating them accidentally in a function body.
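A minimal sketch of that pattern (essentially the classic strict.lua trick; `__newindex` only fires for keys absent from `_G`, so existing globals such as `print` keep working):

```lua
-- Trap accidental global creation with a __newindex metamethod on _G.
setmetatable(_G, {
  __newindex = function(_, name)
    error("attempt to create global '" .. tostring(name) .. "'", 2)
  end,
})

local function f()
  y = 1  -- forgot `local`; this now raises an error instead of silently leaking
end
print(pcall(f))           --> false ...attempt to create global 'y'

rawset(_G, "answer", 42)  -- a deliberate global, created past the metamethod
print(answer)             --> 42
```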
Obviously, training material for such esoteric languages is scarce. (That's why they are esoteric!) So, by definition, LLMs will never be good at esoteric languages.
I have been watching Typst for more than a year, but there are still things Typst cannot do as easily as LaTeX; see https://qwinsi.github.io/tex2typst-webapp/impl-in-typst.html for examples. So I do not agree with the statement that Typst can fully replace LaTeX, at least for now.
Beyond the product itself, there are ecosystem issues as well. LaTeX has mature support in editors such as Emacs, whereas Emacs support for Typst is still in development. So, for now, I will keep using LaTeX, but keep Typst as an option.
That page has a reasonable re-creation, with trivial usage at call sites, of each missing feature, though? The only one that looks a bit revolting is the large pipe example.
> I believe AI agentic coders will threaten tech giants more than they - collectively - threaten software engineers.
Currently, I don't think so. A coding agent's performance generally depends on the quality of the model behind it, and running a powerful model is resource-intensive: not everyone has the hardware and power to run Sonnet 4.5 or Gemini 3 even if they were open source. So, until top-notch models can be deployed on personal computing devices, I would not say coding agents threaten any organization.
Recently there was this post, which was largely generated by Claude Code. Read it.