  • Since the forces that determine policy are largely tied up with corporate profit, promoting the interests of domestic companies against those of other states, and access to resources and markets, our system will misuse AI technology whenever and wherever those imperatives conflict with the wider social good. As is the case with any technology, really.

    Even if “banning” AI were possible as a protectionist measure for those in white-collar and artistic professions, I think it would ultimately find little favor with the ruling classes, since it would concede ground to rival geopolitical blocs, which are in a kind of arms race to develop the technology. My personal prediction is that people in those industries will just have to roll with the punches and accept AI encroaching on their space. This wouldn’t necessarily be a bad thing, if society made the appropriate accommodations to retrain them and/or otherwise redistribute the dividends of this technological progress. But that’s probably wishful thinking.

    To me, one of the most worrying trends, as AI has gained popularity in the public consciousness over the last year or two, has been the tendency to silo these technologies within large companies and build “moats” to protect them. What was once an open and vibrant community, with strong principles of sharing models, data, code, and peer-reviewed papers full of implementation details, is increasingly tending towards closed-source productized software, with the occasional vague “technical report” that reads like an advertising spiel. IMO one of the biggest things we can lobby for is openness and transparency in the field, to guard against the natural monopolies and perverse incentives of hoarding data, technical know-how, and compute power. Not to mention the positive externality spillovers of the open-source scientific community refining and developing new ideas.

    It’s similar to how knowledge of atomic structure gave us both the ability to destroy the world and the ability to power it (relatively) cleanly. Knowledge itself is never a bad thing, only what we choose to do with it.


  • I take your point, but in this specific application (synthetically generated influencer images) it’s largely something that falls out for free from a wider stream of research (namely Denoising Diffusion Probabilistic Models). It’s not like it’s really coming at the expense of something else.

    As for what it’s eventually progressing towards - who knows… It has proven to be quite an unpredictable and fruitful field. For example, Toyota’s research lab recently created a very inspired method of applying diffusion models to robotic control, which I don’t think many people were expecting.
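    For anyone curious what a diffusion model actually does under the hood, here’s a rough numpy sketch of the core DDPM training idea (forward noising plus a noise-prediction objective). The `toy_denoiser` stand-in is hypothetical; in a real system it would be a trained neural net.

    ```python
    # A rough sketch of the DDPM recipe: forward noising + noise-prediction loss.
    # `toy_denoiser` is a placeholder for the neural net eps_theta(x_t, t).
    import numpy as np

    rng = np.random.default_rng(0)

    T = 1000                                  # number of diffusion steps
    betas = np.linspace(1e-4, 0.02, T)        # noise (variance) schedule
    alphas_bar = np.cumprod(1.0 - betas)      # cumulative signal-retention factor

    def forward_noise(x0, t):
        """q(x_t | x_0): corrupt a clean sample x0 with Gaussian noise at step t."""
        eps = rng.standard_normal(x0.shape)
        xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
        return xt, eps

    def toy_denoiser(xt, t):
        """Stand-in for the trained network that predicts the injected noise."""
        return np.zeros_like(xt)

    # Training objective: predict the added noise, minimise the squared error.
    x0 = rng.standard_normal((8, 2))          # a toy batch of "clean" data
    t = int(rng.integers(0, T))
    xt, eps = forward_noise(x0, t)
    loss = np.mean((toy_denoiser(xt, t) - eps) ** 2)
    print(f"t={t}, simple DDPM loss = {loss:.3f}")
    ```

    Generation then amounts to learning that denoiser well enough to run the process in reverse, starting from pure noise.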

    That said, there are definitely societal problems surrounding AI, its proposed uses, legislation regarding the acquisition of data, etc. Oftentimes markets incentivize its use for trivial, pointless, or even damaging applications. But IMO it’s important to note that this is the fault of the structure of our political economy, not of the technology itself.

    The ability to extract knowledge and capabilities from large datasets with neural models is truly one of humanity’s great achievements (along with metallurgy, the printing press, electricity, digital computing, networking communications, etc.), so the cat’s out of the bag. We just have to try and steer it as best we can.



  • In a sense… yes! Although of course it’s thought to operate across many modalities and time-scales, not just text. A crucial piece of the picture is also the Bayesian aspect, which involves estimating one’s uncertainty over those predictions. Further info: https://en.wikipedia.org/wiki/Predictive_coding
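    As a toy illustration of that Bayesian flavour, here’s a hypothetical scalar sketch of a precision-weighted prediction update (Kalman-style; not a faithful model of any specific theory in the literature):

    ```python
    # Toy precision-weighted prediction update: the belief is pulled towards
    # each observation in proportion to how uncertain the prediction is.
    mu, var = 0.0, 1.0        # current belief: mean and variance (uncertainty)
    obs_noise = 0.5           # assumed variance of the sensory signal

    for obs in [0.9, 1.1, 0.95, 1.05]:
        error = obs - mu                    # prediction error
        gain = var / (var + obs_noise)      # precision weighting
        mu += gain * error                  # correct the prediction
        var = (1.0 - gain) * var            # uncertainty shrinks with evidence
        print(f"obs={obs:.2f}  belief={mu:.3f}  var={var:.3f}")
    ```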

    It’s also important to note the recent trends towards so-called “embodied” and “4E” cognition, which treat being situated in a body, in an environment, with control over one’s actions, as essential to explaining the nature of mental phenomena.

    But yeah, it’s very exciting how in recent years we’ve begun to tap into the power of these kinds of self-supervised learning objectives for practical applications like Word2Vec and Large Language/Multimodal Models.
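    To make the “self-supervised” part concrete, here’s a deliberately tiny, hypothetical example where the training signal (the next word) comes from the raw text itself rather than from human labels; real systems like Word2Vec or LLMs replace the counting with a neural network:

    ```python
    # Toy self-supervised objective: predict the next token from the current one.
    # The "labels" are just the text shifted by one position; no annotation needed.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat lay on the mat".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict_next(word):
        """Most likely next token under the toy bigram model."""
        return counts[word].most_common(1)[0][0] if counts[word] else None

    print(predict_next("the"))   # learned purely from the raw text itself
    ```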


  • Some of the current thinking on the shortcomings of LLM capabilities actually draws on human cognitive science, and on what can be learned from people with neurological impairments. Human language abilities are thought to be strongly dissociated from other reasoning abilities, because individuals with aphasia can lack the ability to speak or comprehend language yet still be able to solve mathematical problems, engage in logical reasoning, enjoy music, categorize objects and events, etc.

    It’s been shown that LLMs develop a crude world model for performing reasoning tasks, yet it’s inextricably tied up with their language faculties (since they are language-based only). The hope for future research is to develop AIs whose world models and planning faculties are decoupled from the language-analysis module, which would mitigate hallucination and aid interpretability.