I’m 2 and I use a smartphone that only executes Fortran through punch cards.
There’s actually an open-source community reverse-engineering EV motors, inverters, battery charging modules, battery management systems (BMS), and everything else needed to build a DIY car from scrapyard components: https://openinverter.org/wiki/Main_Page
In my opinion he should step down as CEO of Linux, and hand the job over to someone more qualified, like Ethan Zusks.
I get that, but what I’m saying is that calling deep learning “just a fancy comparison engine” frames the concept in an unnecessarily pessimistic and sneery way. It’s more illuminating to look at the considerable mileage that “just pattern matching” yields, not only for practical engineering applications, but also for the cognitive scientist and the theoretician.
Furthermore, what constitutes being “actually creative”? Consider DeepMind’s AlphaGo Zero model:
“Mok Jin-seok, who directs the South Korean national Go team, said the Go world has already been imitating the playing styles of previous versions of AlphaGo and creating new ideas from them, and he is hopeful that new ideas will come out from AlphaGo Zero. Mok also added that general trends in the Go world are now being influenced by AlphaGo’s playing style.”
Professional Go players and champions concede that the model developed novel styles and strategies that now influence how humans approach the game. If that can’t be considered a true spark of creativity, what can?
To counter the grandiose claims that present-day LLMs are almost AGI, people go too far in the opposite direction. Dismissing them as mere “line of best fit analysis” fails to recognize the power, significance, and difficulty of extracting meaningful insights and capabilities from data.
Aside from the fact that many modern theories in cognitive science are deeply related to statistical analysis and machine learning (embodied cognition, Bayesian predictive coding, and connectionism, for example), calling it a “line” of best fit is disingenuous because it downplays the important fact that the relationships found in these data are not lines, but highly non-linear, high-dimensional manifolds. Developing techniques to efficiently discover those relationships in giant datasets is genuinely a HUGE achievement in humanity’s mastery of the sciences: it has let us create programs for tasks that would be impossible to write out explicitly as classical programs. In particular, our current ability to create classifiers and generators for unstructured data like images would have been unimaginable a couple of decades ago, yet we’ve already begun to take it for granted.
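To make the “line vs. manifold” point concrete, here’s a toy sketch of my own (plain numpy; the data, architecture, and hyperparameters are all made up for illustration). A literal line of best fit can’t capture even a simple one-dimensional non-linear relationship, while a tiny network trained by the same kind of optimization used in deep learning fits it easily:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

# (a) The literal "line of best fit", via ordinary least squares.
X = np.hstack([x, np.ones_like(x)])
w_lin, *_ = np.linalg.lstsq(X, y, rcond=None)
line_mse = np.mean((X @ w_lin - y) ** 2)

# (b) A tiny 1 -> 32 -> 1 tanh network, trained by plain gradient descent.
W1 = 0.5 * rng.standard_normal((1, 32)); b1 = np.zeros(32)
W2 = 0.5 * rng.standard_normal((32, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)           # hidden activations
    err = (h @ W2 + b2) - y            # residuals of the current fit
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    W2 -= lr * (h.T @ err) / len(x); b2 -= lr * err.mean(axis=0)
    W1 -= lr * (x.T @ dh) / len(x);  b1 -= lr * dh.mean(axis=0)

net_mse = np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2)
print(f"line MSE: {line_mse:.4f}  vs  tiny-net MSE: {net_mse:.4f}")
```

Scaled up by many orders of magnitude in parameters and input dimensions, that second loop is the “curve” actually being fit, which is why “line of best fit” undersells it.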
So while it’s important to temper expectations (we are likely a long way from seeing anything resembling AGI as it’s typically conceived), oversimplifying all neural models as “just” line fitting blinds you to the true power and generality of manifold learning through optimization - in its connections to information theory, to energy and entropy in the brain, to engineering applications, and to the nature of knowledge itself.
Calling someone “stupid” or “dumb” is all too common, especially online in places like Reddit and Twitter. It’s a lazy and vacuous insult, or at best just a way to vent frustration.
It’s much better, and more constructive, to be specific about what you find reprehensible. It could be that they have horrible morals, and calling them stupid is shorthand for saying they’re unable to reason their way to a consistent and correct set of moral principles. Or it could be that they’ve been indoctrinated into nasty world-views, and their “stupidity” shows up as a failure to resist or escape the indoctrination. Or they could be deliberately hurtful trolls who say outrageous and inflammatory things to upset others, in which case the “stupid” behavior is likely an outward-facing reaction to some trauma in their own lives. Or maybe they’re just sadistic, which deserves to be called out specifically rather than chalked up to stupidity. A lot of anti-intellectual posturing seems to come from some combination of these causes.
Anyway, I feel like being specific about your criticisms not only promotes compassion (which is ultimately most likely to win over those we disagree with) but also prompts you to more thoughtfully reflect on your own positions.
This is orthogonal to the topic at hand. How does the chemistry of biological synapses, by itself, produce a different kind of learned model, one that would warrant different legal treatment?
The overarching (and relevant) similarity between biological and artificial nets is the use of connectionist distributed representations and the projection of data onto lower-dimensional manifolds. Whether the network arrives at its final connectome through backpropagation or through a more biologically plausible learning rule is beside the point.
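As a toy illustration of that last point (my own sketch; the task, architecture, and perturbation rule are all made up for this comment, with an evolution-strategies-style finite-difference update standing in for “more biologically plausible” learning - gradient-free in the sense of never running backprop, not a model of real synapses):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((400, 2))
y = (x[:, 0] * x[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-style quadrant labels

def unpack(theta):
    """Slice a flat parameter vector into a 2 -> 8 -> 1 network."""
    return (theta[:16].reshape(2, 8), theta[16:24],
            theta[24:32].reshape(8, 1), theta[32:])

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(x @ W1 + b1)                # distributed hidden code
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # sigmoid output
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

theta = 0.5 * rng.standard_normal(33)
sigma, lr, pairs = 0.1, 0.5, 20
for _ in range(500):
    g = np.zeros_like(theta)
    for _ in range(pairs):
        eps = rng.standard_normal(theta.shape)
        # Two-point perturbation estimate: no gradients, no backprop,
        # just "nudge the weights and keep what lowers the loss".
        g += (loss(theta + sigma * eps) - loss(theta - sigma * eps)) / (2 * sigma) * eps
    theta -= lr * g / pairs

W1, b1, W2, b2 = unpack(theta)
p = 1 / (1 + np.exp(-(np.tanh(x @ W1 + b1) @ W2 + b2)))
print("accuracy:", ((p > 0.5) == y).mean())
```

The network typically ends up well above chance on this non-linearly-separable task, and what it converges to is still a distributed representation in the hidden layer - the particular learning rule that produced it is incidental.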