Apparently there are several narratives regarding AI girlfriends:
- Incels use AI girlfriends because they can do whatever they desire with them.
- Forums observing incel spaces agree that incels should use AI girlfriends and leave real women alone.
- The general public has concerns about AI girlfriends because users might be negatively impacted by them.
- Incels perceive this as a revenge fantasy: “women are jealous that they’re dating AI instead of them.”
- Forums observing incel spaces are unsure whether the opposition to AI girlfriends even exists, given their own previous agreement.
I think this is an example of miscommunication, and of how different groups of people form different opinions depending on what they’ve seen online. Perhaps the incel-observing forums know that many incels have passed the point of no return, so AI girlfriends would help them, while the general public judges the dangers of AI girlfriends by their impact on a broader demographic, hence the broad disapproval.
Thank you very much for the links. I’m going to read that later. It’s a pretty long article…
I’m not sure about the impending AI doom. I’ve refined my opinion lately. I think it’ll take most of the internet from us: drown out meaningful information and spam it with low-quality clickfarming text and misinformation. And the “algorithms” of TikTok, YouTube & Co will continue to drive people apart and confine people in separate filter bubbles. And I’m not looking forward to every customer service being just an AI… I don’t quite think it’ll happen through loneliness, though. Or in an apocalypse like in Terminator. It’s going to be interesting. And inevitable, in my eyes. But we’ll have to see if science can tackle hallucinations and alignment. And whether the performance of AI and LLMs is going to explode like in the previous months, or if it’s going to stagnate soon. I think it’s difficult to make good predictions without knowing this.
Hmmh. Sometimes I have difficulties understanding you.
[Edit: Text removed.] If your keys are too small, you should consider switching to a proper computer keyboard, or a (used) laptop. Regarding the exponential growth: we have new evidence that supports the position that it’ll plateau: https://youtube.com/watch?v=dDUC-LqVrPU Further research is needed.
Sure. Multimodality is impressive, and there is quite some potential there. I’m sure robots/androids are also going to happen, and all of this will have a profound impact. Maybe they’ll someday get affordable for the average Joe and I can have a robot do the chores for me.
But we’re not talking about the same thing. The video I linked suggests that performance might peak and plateau. That means it could very well be the case that we can’t make them substantially more intelligent than, say, ChatGPT-4. Of course we can fit AI into new things and innovate, and there is quite some potential. It’s just about performance/intelligence. It’s explained well in the video. (And it’s just one paper and the already existing approaches to AI. It doesn’t rule out science finding a way to overcome that. But as of now we don’t have any idea how to do that, other than pumping millions and millions of dollars into training to achieve a smaller and smaller return in increased performance.)
Hmmh. I’m a bit split on bio implants. Currently that’s hyped by Elon Musk. But that field of neuroscience has been around for a while. They’re making steady (yet small) progress. Elon Musk didn’t contribute anything fundamentally new. And I myself think there is a limit. I mean, you can’t stick a million needles into a human brain, everywhere from the surface to deep down, to hook into all brain regions. I think it’s mostly limited to what’s accessible from the surface. And that’d be a fundamental limitation. So I doubt we’re going to see crazy things like in sci-fi movies such as The Matrix or Ready Player One. But I’m not an expert on that.
With that said, I share your excitement for what’s about to come. I’m sure there is lots of potential in AI and we’re going to see crazy things happen. I’m a bit wary of consequences like spam and misinformation flooding the internet and society, but that’s already inevitable. My biggest wish is science finding a way to teach LLMs when to make things up and when to stick to the truth… what people call “hallucinations”. I think it’d be the next biggest achievement if we had more control over that. Because as of now the AIs make up lots of facts that are just wrong. At least that’s happening to me all the time. And they also do it when doing tasks like summarization. And that makes them less useful for my everyday tasks.
Profound impact. They’ll be affordable when we’re worth more with them than without them, and it’s profitable to someone and lucrative enough to someone else for us to have them.
It’s not lost on me that a video you linked suggests that performance might peak and plateau. And it’s the future being guessed at, long-term or short-term, by individuals who are not well-enough informed to offer much besides a forecast somewhat like the weather. Humans are reluctant to educate themselves deeply, cross-discipline, through experiential learning and lived experience; immersed for a period of time with whatever they’re attempting to forecast. Difficult to live in the future for a while, to know it well enough to perhaps predict successfully… With some exceptions, of course, and not claiming that it’s entirely unpredictable…
-“very well be the case that we can’t make them substantially more intelligent than, say, ChatGPT-4” - GPT-4, after it self-reflected and self-improved, was substantially more intelligent than GPT-4, a year ago, I think. There is absolutely no way that they AREN’T already substantially more intelligent than, say, ChatGPT-4. No way. And by whatever measure, there’s no way that they won’t get MUCH more intelligent. Absolutely no way. We don’t need new tech to accomplish something like this. Slightly older tech would work just fine, and ‘smarter not harder’ is successful even if being ‘ghetto’ with a setup were important. It’s just hooking it together and dumping power into it.
To be clear, I’m not saying anything about how many systems are more intelligent, but let’s imagine for a moment that 4 countries connected their supercomputers that don’t get as far as being purveyed to the common individual or make the news. Each of those systems is already FAR past GPT-4. Those four systems working together? Substantially more intelligent. We’re not even to efficient computing yet, and DeepSouth runs on just 20 watts.
If this is confusing, then look at military technology and tell me how many of the top-secret projects we now know governments (and military-industrial complexes) work on were common knowledge to the public. I invite you to reconsider.
To the corporations who are working with the best systems, “millions and millions of dollars” is something they could drop on the ground and lose, and not even blink about losing. Like you dropped a dollar. There’s no value in the money without spending it. It’s a figment of the collective imagination until the money does something. It’s paper with ink. Pieces of plastic and metal. Electrical impulses. Being in debt (correctly) is valuable. Communication; interaction is far more valuable. Because afterwards the money might do something. These people borrow tens of billions without even thinking about it. Receive investments of hundreds of billions.
When hackers set up for an $8B heist, they’re happy when they get only a billion.
If they wanted to play “Joker” and light millions upon millions on fire to watch it burn, they’d never miss the money.
Bio implants? I enjoyed ‘The Artificial Kid’ when I was growing up…
Bad until proven safe for 75 years by a continuous case-study group of 100M or so?…
-“stick a million needles” - No need. External systems have already been developed to mind read. Doesn’t mean they do it really well yet, but we’re not at sticking 1 million needles in, because chips and connections can be grown in the tissue, and don’t you think the human body already has the low-voltage circuitry? I feel like you’re squandering the resources we made available to you, Rufus.
-“a bit wary of consequences like spam and misinformation flooding the internet and society” - You mean like “religion” and “books” flooded the entire world?😉 From Strange Days: “It’s not how paranoid you are. It’s whether you’re paranoid enough.”
-“I think it’d be the next biggest achievement if we had more control over that.” - Rufus, humans got where we are because of hallucinations. Probably became human because of them. Probably survived as a species because of them.
Please, come back and catch up with me after you get more familiarized with some of the resources we gave you access to. I look forward to more conversations.
With the worth, that’s an interesting way to look at it.
I don’t think you grasped how exponential growth works. And the opposite: logarithmic growth. That means it grows fast at first, and then slower and slower. If it’s logarithmic, at first you double the computing power and get a big return… quadruple the performance or even more… but the gains shrink quickly. At some point you’re like in your example, connecting 4 really big supercomputers, and you just get a measly 1% performance gain over one supercomputer. And then you have to invest trillions of dollars for the next 0.5%. That’d be logarithmic growth. We’re not sure where on the curve we currently are. We’ve certainly seen the fast growth in the last months.
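Here’s a toy example of what I mean, as a minimal sketch with made-up numbers (not real measurements): if performance grows like the logarithm of compute, every doubling of compute adds the same absolute gain, so the relative return per doubling keeps shrinking.

```python
import math

# Toy model: performance = log2(compute). Purely illustrative.
# Every doubling of compute adds a constant +1 to performance,
# so the *relative* gain per doubling keeps shrinking.
prev_perf = None
for exponent in range(1, 12):
    compute = 2 ** exponent
    perf = math.log2(compute)
    if prev_perf is not None:
        rel_gain = (perf - prev_perf) / prev_perf * 100
        print(f"compute {compute:5d}x -> performance {perf:4.1f} "
              f"(+{rel_gain:3.0f}% over the previous doubling)")
    prev_perf = perf
```

In this toy model the first doubling buys a 100% improvement; by the tenth it’s down to 10%, even though each doubling costs twice as much hardware as the one before.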
And scientists don’t really do forecasts. They make hypotheses and then they test them. And they justify them experimentally. So no, it’s not the future being guessed at. They used a clever method to measure the performance of a technological system. And we can see those real-world measurements in their paper. Why do you say the top researchers in the world aren’t “well-enough informed” individuals?
FOR YOUR CONSIDERATION
Synthesized Consensus
Exponential Growth (25+ individuals): Most expect rapid, continued growth over the next 8-15 years, often linked to advancements in technology and AI’s integration into various sectors.
Logarithmic Growth (17+ individuals): Many foresee significant early advancements that will gradually plateau, influenced by ethical, societal, and practical challenges.
S-curve Growth (8 individuals): A few predict periods of rapid innovation followed by a stabilization as AI reaches maturity or encounters insurmountable hurdles.
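For reference, the three growth shapes named above correspond to simple functional forms. A minimal sketch with arbitrary constants, purely to illustrate the shapes (not fitted to any data):

```python
import math

def exponential(t, rate=0.5):
    """Exponential: the growth rate itself keeps increasing."""
    return math.exp(rate * t)

def logarithmic(t):
    """Logarithmic: fast early gains that slow down more and more."""
    return math.log(1 + t)

def s_curve(t, midpoint=7.5, steepness=1.0):
    """S-curve (logistic): slow start, rapid middle, saturation at a ceiling."""
    return 1 / (1 + math.exp(-steepness * (t - midpoint)))

for t in range(0, 16, 5):  # e.g. years 0, 5, 10, 15
    print(f"t={t:2d}  exp={exponential(t):8.1f}  log={logarithmic(t):5.2f}  s={s_curve(t):4.2f}")
```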
Given the various perspectives offered by the panel on the initial phase of AI growth, let’s extend the reasoning to speculate about what might happen beyond the next 8-15 years:
Those predicting Exponential Growth (indefinite), like Larry Page, Elon Musk, and Mark Zuckerberg, might suggest that AI growth could continue to escalate without a foreseeable plateau. They likely envision ongoing, transformative innovations that continuously push the boundaries of AI capabilities.
Those foreseeing Exponential Growth for a finite period (e.g., Andrew Ng, Yann LeCun, Demis Hassabis) might anticipate a shift after the initial rapid growth phase. After the high-growth years, they might predict a transition to a slower, more sustainable growth pattern or a plateau as the AI industry matures and technological advancements face diminishing returns or run up against theoretical and practical limitations.
Proponents of Logarithmic Growth, like Ian Goodfellow, Daphne Koller, and Safiya Noble, generally expect growth to slow and eventually plateau. After the initial period of significant advancements, they might predict that the AI field will stabilize, focusing more on refinement and integration rather than groundbreaking innovations. Ethical, regulatory, and societal constraints could increasingly play a role in moderating the speed of development.
Advocates of S-curve Growth, such as Gary Marcus and Peter Thiel, typically envision that after a period of rapid innovation, growth will not only plateau but could potentially decline if new disruptive innovations do not emerge. They might see the field settling into a phase where AI technology becomes a standard part of the technological landscape, with incremental improvements rather than revolutionary changes.
Special Considerations: Visionaries like Eliezer Yudkowsky, who speculate about AI reaching superintelligence levels, might argue that post-15 years, the landscape could be radically different, potentially dominated by new AI paradigms or even AI surpassing human intelligence in many areas, which could either lead to a new phase of explosive growth or require significant new governance frameworks to manage the implications.
Overall, the panel’s consensus beyond the next 8-15 years would likely reflect a mixture of continued growth at a moderated pace, potential plateaus as practical limits are reached, and a landscape increasingly shaped by ethical, societal, and regulatory considerations. Some may also entertain the possibility of a decline if no new significant innovations emerge.
I don’t dispute that there may well be limits at some point, on some scale.
You’re not unpacking ANY of the nuances which contribute to function and performance when you look at “exponential vs logarithmic” and set it atop the concept of returns. I feel that this reductive approach is like taking “good vs bad” and setting it atop “human behavior”. There’s the whole rest of the world of conversations & considerations, however, which comes into play once discussion of theory moves into details. Yes, I know there are papers discussing this concept, but the discussion doesn’t get into all the other factors which improve performance.
Pick an expert who says exponential… Pick an expert who says logarithmic… Pick your nose…
Doesn’t mean someone’s right and someone’s wrong, thanks.
We’re on the same page as far as a presentation that, at some point, somewhere, for some possible reason, improvement and capacity may plateau.
To be candid, if the grid went down, improvement and capacity wouldn’t even gradually plateau, and that has nothing to do with laws, theories, and predictions.
Again, we haven’t even discussed DNA data storage and computing, ultra-low-volt hybrid systems, hyperdimensional computing and vectors, holographic data storage…
Don’t bother telling me that these things have all been studied and documented thoroughly, thanks.
I don’t even want to get into quantum computing or quantum structures in the brain.
We’re clear there’s a theory floating around from one camp that things might plateau. And that it’s opposed by another camp.
We’re clear.
LLMs far superior to GPT-4 were functioning last year, and LLMs are already in working robots, some of them 9th-generation iterations.
-“And scientists don’t really do forecasts. They make hypotheses and then they test them. And they experimentally justify it.”
FORECASTS BY SCIENTISTS VERSUS SCIENTIFIC FORECASTS - https://faculty.wharton.upenn.edu/wp-content/uploads/2007/07/54-JSA-Global-Warming-Forecasts-by-Scientists-versus-Scientific-Forecasts.pdf
You know that scientists test their climate models by using them to forecast past and future climates, right? …scientists…forecast…
“Predictive models forecast what will happen in the future.” - https://learn.genetics.utah.edu/content/earth/predictions
“Correct predictions alone don’t make for a good scientific model.” - https://www.scientificamerican.com/article/the-truth-about-scientific-models/
“Prediction involves estimating an outcome with a high level of certainty, usually based on historical data and statistical modeling. On the other hand, a forecast involves projecting future developments but with a certain level of uncertainty due to external factors that may impact the outcome.” https://plat.ai/blog/difference-between-prediction-and-forecast.
We’re going to need to meet at the reality that historical data doesn’t necessarily mean a thing about the future. In 1903, the New York Times predicted that airplanes would take 10 million years to develop. Only nine weeks later, the Wright Brothers achieved manned flight. The pathologically cynical will always find a reason to complain. https://bigthink.com/pessimists-archive/air-space-flight-impossible/ Just because a statistical model has a track record doesn’t mean it’s accurate, or will continue to be. Statistics are estimates.
Thank you. I went to high school and graduated. My father taught chemistry, physics and computers for 40 years.
-“So no, it’s not the future being guessed at” - If it’s not happening now and we have more curve to be placed on, my apologies, but it is happening in the future, after future developments and future technologies very likely may have come into play. Animal evolution can occur in one generation. Please don’t suggest that things beyond our understanding won’t affect the curve in the future, since we’re still ‘climbing the curve’. Thank you.
- Law of Penrose’s Triangle defied? Looks like it
- Moore’s Law broken? Yes
- Kryder’s Law broken? Yes
- The speed of light broken? Yes
- Light has been stopped (paused in place) and restarted in transit?* Yes
- Organic tissue is growing on circuit boards? Yes
-“They used a clever method to measure the performance of a technological system.” - Alright. Doesn’t mean it’s true or even likely anymore.
-“And we can see those real-world measurements in their paper.” - Sure. They took and recorded measurements.
How many dimensions are there? 6, right? 14? Is gravity a constant?
‘The perils of predicting the future based on the past’ - https://medium.com/swlh/the-perils-of-predicting-the-future-based-on-the-past-9de0f248c183
The statement “By looking at the past we can predict the future” encapsulates the idea that historical patterns and events can provide insights that help us anticipate future outcomes. This concept is often associated with the field of predictive analytics and forecasting. While it is true that studying the past can offer valuable information and trends that may be indicative of future events, it is important to recognize that the future is inherently uncertain and unpredictable.
-“Why do you say the top researchers in the world aren’t ‘well-enough informed’ individuals?” - Absolutely. They don’t know jack sh!t about the rest of the world and how everything else influences their specialty in reality, instead of on paper. They certainly aren’t well-informed in all the cross-disciplinary fields. They don’t collaborate with all the other related specialists.
*https://www.abc.net.au/news/science/2016-09-27/scientists-stop-light-like-star-wars-in-cloud-of-atoms/7867344
https://en.wikipedia.org/wiki/Scientific_method
No. Science isn’t done by majority vote. It’s the objective facts that matter. And you don’t pick experts or perspectives; that’s not scientific. It’s about objective truth, and a method to find it.
We’re now confusing science and futurology.
And I think scientists use the term “predict” and not “forecast”. There is a profound difference between a futurologist forecasting the future and science developing a model and then extrapolating. The Scientific American article “The Truth about Scientific Models” you linked sums it up pretty well: “They don’t necessarily try to predict what will happen—but they can help us understand possible futures”. And: “What went wrong? Predictions are the wrong argument.”
And I’d like to point out that article is written by one of my favorite scientists and science communicators, Sabine Hossenfelder. She also has a very good YouTube channel.
So yes: what about DNA, quantum brains, Moore’s Law… what about other people claiming something? None of that changes any facts.
Synthesized Consensus
This role-played synthesis suggests a general optimism for the near to mid-term future of AI, with a consensus leaning towards exponential growth, though moderated by practical, ethical, and societal considerations.
When examining the growth pattern over time of AI intelligence and capacity, and the growth of the field of AI, several key factors should be considered to gain a comprehensive understanding:
Technological Limitations and Breakthroughs: Understanding the potential technological breakthroughs as well as limitations is crucial. This includes hardware advancements, such as quantum computing and neuromorphic technology, which could radically alter AI’s capabilities and growth trajectory. Consider how close we are to fundamental physical limits of computing.
Economic Factors: The economic viability of AI innovations plays a significant role in its development. Assess the investment trends, market demand, and economic cycles that could either accelerate or slow down AI adoption.
Societal Impact and Acceptance: The acceptance of AI by society, influenced by factors like job displacement, privacy concerns, and trust in AI decisions, significantly affects the pace at which AI technologies are adopted and integrated into everyday life.
Regulatory and Ethical Considerations: As AI becomes more integrated into critical areas of life and business, regulatory and ethical oversight will increase. The development of international norms and regulations could either foster a supportive environment for AI development or impose restrictions that might slow down progress.
Interdisciplinary Collaboration: AI’s growth is increasingly dependent on insights from various fields such as psychology, neuroscience, ethics, and public policy. The depth and nature of these interdisciplinary collaborations can influence the directions and applications of AI.
Geopolitical Influences: The strategic priorities of nations regarding AI, including national security concerns, can drive the speed and direction of AI development. Competition between countries might spur rapid advancements, while international tensions could also lead to fragmented technology ecosystems.
Environmental Impacts: The environmental cost of training large AI models and maintaining AI infrastructure is becoming an important consideration. Sustainable practices in AI development could become a significant factor influencing growth patterns.
Feedback Loops in AI Evolution: As AI systems become capable of participating in their own design and improvement processes, feedback loops could significantly accelerate the pace of AI advancements. This self-improving AI could lead to growth patterns that are difficult to predict based on historical data alone.
Public Perception and Media Influence: How AI is portrayed in the media and public discourse can impact regulatory and market dynamics. Public fears or support can lead to significant shifts in policy and investment.
By considering these factors, you can develop a more nuanced view of how AI might evolve and impact various aspects of life and society, enabling better strategic planning and decision-making in relation to AI technologies.
Predicting the growth of AI and its impact on various sectors involves a complex interplay of multiple scientific, technological, and socioeconomic factors. Several predictive laws and theories have been used to forecast technology development, including AI. Here are a few prominent ones (a short numeric sketch follows the list):
Moore’s Law: Historically used to predict the doubling of transistors on a microchip approximately every two years, this law has implications for the computational power available for AI systems. Although the pace of Moore’s Law has slowed, the principle that hardware capability could grow exponentially has fueled expectations for AI performance improvements.
Kurzweil’s Law of Accelerating Returns: Ray Kurzweil proposed this theory, suggesting that technological change is exponential. According to Kurzweil, as each generation of technology improves, it accelerates the development of the next generation, leading to faster and more profound changes. This theory is often cited in discussions about AI’s potential to achieve rapid advancements in a relatively short time.
Wright’s Law: Also known as the learning curve theory, Wright’s Law states that for every cumulative doubling of units produced, costs will fall by a constant percentage. In the context of AI, this can be applied to the improvement of algorithms and the reduction of computational costs over time as more AI systems are developed and deployed.
Gilder’s Law: This law focuses on the bandwidth of communication networks doubling every 21 months. As AI systems often depend on vast data transfers, improvements in network capabilities can significantly impact AI development and deployment.
Metcalfe’s Law: This law states that the value of a network is proportional to the square of the number of its users. For AI, this could be analogous to the idea that as more data sources and AI systems connect and interact, the overall value and capability of these systems increase exponentially.
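To make the quantitative claims in these laws concrete, here is a minimal sketch. All constants are made up for illustration and are not calibrated to real data:

```python
import math

def moores_law(years, initial_transistors=2_300):
    """Moore's Law: transistor count doubles roughly every two years."""
    return initial_transistors * 2 ** (years / 2)

def wrights_law(cumulative_units, first_unit_cost=100.0, learning_rate=0.20):
    """Wright's Law: cost falls by a constant fraction (here 20%)
    for every cumulative doubling of units produced."""
    doublings = math.log2(cumulative_units)
    return first_unit_cost * (1 - learning_rate) ** doublings

def metcalfes_law(users):
    """Metcalfe's Law: network value proportional to the square of user count."""
    return users ** 2

print(moores_law(10))          # 2**5 = 32x the starting transistor count after a decade
print(wrights_law(1_000_000))  # unit cost after ~20 cumulative doublings
print(metcalfes_law(1_000))    # 10x the users means 100x the network value
```

Note that all three are descriptive regularities fitted to past data, not physical laws, which is exactly why they can slow down or break, as Moore’s Law has.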
Are There Reliable Studies Offering Definitive Answers?
While these laws provide frameworks for thinking about the growth of technology, including AI, they are not without their limitations and criticisms. The development of AI is influenced not just by technological advancements but also by a variety of other factors including regulatory policies, ethical considerations, economic conditions, and societal acceptance. This makes it challenging to predict the growth of AI with high accuracy using any single law or model.
Empirical Studies and Forecasts: There are numerous studies and reports from reputable organizations such as the McKinsey Global Institute, Gartner, and the Stanford AI Index that analyze trends and make forecasts about AI development. However, these predictions are often based on current and historical data and may not fully account for unexpected breakthroughs or setbacks.
Consensus in the Scientific Community: Generally, there is no single definitive study that can predict the exact trajectory of AI development. The field is evolving rapidly, and new variables can emerge that significantly alter the landscape. Most accurate predictions tend to be short-term and become less reliable as they extend into the future.
In summary, while scientific laws and theories like Moore’s Law and Kurzweil’s Law of Accelerating Returns provide useful insights, they should be viewed as part of a broader set of tools for understanding the potential growth of AI. They need to be supplemented with continuous observation of emerging trends, technological breakthroughs, and shifts in policy and public sentiment to more accurately forecast the future of AI.
The expansion of AI into space introduces a whole new paradigm with unique opportunities and challenges. Here are a few ways this panel might view AI’s role in space exploration and expansion:
Enhanced Autonomy in Space Exploration: Leaders like Elon Musk and Larry Page, who are already invested in space technology through their companies, might foresee AI as crucial for managing autonomous spacecraft, probes, and robotic systems. AI could handle complex tasks like navigation, maintenance, and decision-making in environments where human oversight is limited by distance and communication delays.
AI in Space Colony Management: Visionaries such as Sam Altman and Demis Hassabis might predict that AI will play a significant role in managing habitats and life-support systems on other planets or moons. These systems would require high levels of automation to ensure the safety and efficiency of off-world colonies.
AI for Scientific Research in Space: Scientists like Geoffrey Hinton and Yoshua Bengio could see AI as a tool to process vast amounts of data from space missions, helping to make discoveries that are beyond human analytical capabilities. AI could autonomously manage experiments, analyze extraterrestrial materials, and monitor celestial phenomena.
AI in Space Resource Utilization: Business leaders like Jeff Bezos, who has expressed interest in space through Blue Origin, might consider AI crucial for identifying and extracting resources. AI could control robotic miners and processing facilities, optimizing the extraction of water, minerals, and other materials essential for space colonization and possibly even for return to Earth.
Ethical and Governance Challenges: Ethicists and regulatory-focused professionals like Joy Buolamwini and Miriam Vogel might raise concerns about deploying AI in space. They could focus on the need for stringent protocols to govern AI behavior, avoid potential conflicts over space resources, and ensure that space exploration remains beneficial and accessible to all humanity, not just a few privileged entities.
Long-term AI Evolution: Futurists like Eliezer Yudkowsky might speculate on how AI could evolve uniquely in the space environment, potentially developing in ways that differ significantly from Earth-based AI due to different operational challenges and evolutionary pressures.
In this new off-planet context, AI’s growth could continue to accelerate in unique directions, facilitated by the absence of many constraints present on Earth, such as physical space and regulatory barriers. This could lead to new forms of AI and novel applications that could feed back into how AI evolves and is applied on Earth.
Given the unique opportunities and challenges presented by space exploration, the panel of AI and business leaders might envision several likely patterns of growth for AI in this context:
Accelerated Innovation and Specialization: As AI systems are tasked with operating autonomously in space environments, we can expect a surge in innovation aimed at developing highly specialized AI technologies. These AIs would be designed to withstand the harsh conditions of space, such as radiation, vacuum, and extreme temperatures, and to perform without direct human supervision. This could lead to rapid growth in specific AI domains like robotic autonomy, environmental monitoring, and resource extraction technologies.
Integration with Space Technologies: The integration of AI with space technology would likely become more profound. AI could be instrumental in designing spacecraft and habitat modules, optimizing flight trajectories, and managing energy use. This integration might follow an exponential growth curve initially, as breakthroughs in AI-driven space technologies lead to further investments and interest in expanding these capabilities.
Scalable Deployment Models: Given the cost and complexity of space missions, AI systems designed for space might initially focus on scalability and adaptability. This could lead to growth patterns where AI systems are incrementally upgraded and expanded upon with each successive space mission, rather than replacing them entirely. As such, growth could be steady and sustained over a long period, following a more logarithmic pattern as technologies mature and become standardized.
Collaborative International Frameworks: As countries and private entities push further into space, international collaborations involving AI could become necessary. This could stimulate a steady growth of AI technologies as frameworks are developed to ensure that AI systems can interoperate seamlessly across different platforms and missions. These collaborative efforts might stabilize the growth rate, moving it towards a more predictable, linear path.
Regulatory and Ethical Adaptation: Ethical and regulatory considerations will also shape AI’s growth trajectory in space. As AI systems take on more responsibilities, from running life support systems to conducting scientific research, ensuring these systems operate safely and ethically will become paramount. Growth might initially be rapid as regulations struggle to keep up, but eventually, a plateau could occur as stringent standards and international agreements are put in place.
Transformational Growth Phases: Over the long term, as AI starts enabling deeper space exploration and potentially the colonization of other planets, we could witness transformational growth phases where AI development leaps forward in response to new challenges and environments. These phases might appear as spikes in an otherwise steady growth curve, corresponding to major milestones such as the establishment of the first permanent off-world colonies.
Overall, while the early stages of AI in space might be marked by exponential growth due to new opportunities and technological breakthroughs, the growth pattern could transition to a more steady, logarithmic, or piecewise linear trajectory as the technologies mature, regulatory frameworks are established, and the challenges of operating in space become better understood and managed.