AI and people

This post was written by a student. It has not been fact checked or edited.

Written standpoint on AI

What is artificial intelligence, really? That is one of the most difficult questions right now. Is it a great technology, as Sam Altman says? Or is it a bad thing because it makes global warming worse? I realized one important thing through joining Student Hub: we may never fully understand AI, but we should take a neutral position toward it.

There are some good points about AI, such as developing new technologies, making operations smoother, and so on. For example, people's lives can be saved when AI discovers their diseases before doctors notice any anomaly in the body. Also, AI allows us to focus on new ideas, such as new styles of playing Go, which is fascinating.

If there are good points, there must also be disadvantages, like biased thinking, job losses, or a tendency toward extreme thinking. A few months ago, an AI-generated post showing a fake image of a disaster affected so many people that inquiries flooded into the area believed to be shown in the image. That kind of thing drives local administrations crazy.

Reading all the comments on the AI platform, I see AI as a version of an old philosophical problem: "What is a human being?" We can point out individual features, but we can't define the whole well, just like with AI. The term "AI" could be replaced with "human being" in almost every question on Student Hub. We can't express exactly what makes a human being. If we don't even know ourselves, how can we expect to know AI, which is also a kind of brain? I suppose that is why people argue about AI endlessly. It does help us understand many points of view, though.

I want to quote Gandhi: "A man is but the product of his thoughts. What he thinks, he becomes." From my standpoint, it means this: if we are too nervous to use AI, we will never see new perspectives; but if we are too optimistic and ignore the difficulties, we are likely to run into them, so we need to keep a balance. That might be tough, because we will keep discovering uncharted things about AI, since it was shown to the public only a few years ago. That is scary, but it is the same with relationships, where you see both the nice and the terrible sides of a friend.

We shouldn't rush to conclusions. We need to be patient, just as we are with the people around us.

Comments (3)


  • I think you have a very nuanced view on AI, and I agree that we should consider all perspectives and be patient before criticizing AI harshly. AI's benefits to humanity are weighed against its disadvantages, such as how it can take jobs and make people dependent on it. That is especially true of AI chatbots, which make the line between AI and humans even harder to find. For a lot of people, their relationship with AI affects them negatively, whether through over-dependence or losing a job to AI. But AI can also work alongside people for more efficiency. So when using AI, I think people should always disclose whether something was made with AI or AI assistance, so others can easily see whether it is true, perhaps even through a law requiring it.

  • Your reflection on AI's duality is balanced, a perspective sorely needed in polarized debates. I agree that neutrality is key; while I do think AI leans a bit more toward the harmful side, it's neither fully "good" nor "evil" but a mirror that reflects human ingenuity and flaws. Like you said, its potential to save lives (e.g., early disease detection) coexists with risks like misinformation or job displacement. Your angle feels the most balanced: AI's ambiguity forces us to confront what it means to be human. If we struggle to define our own selves, how can we fully grasp a technology modeled after them?

    Gandhi's quote frames the need for an equilibrium. Overcaution might stop innovation (imagine dismissing the internet in the 90s), but reckless optimism ignores harm. Your analogy to human relationships is on point: we navigate complexity by balancing trust and vigilance. AI demands the same: rigorous ethics and transparency.

    AI's energy hunger is also a problem, but AI is also optimizing renewable grids and predicting environmental crises. This duality mirrors us as both planet-healers and destroyers. The old biblical tale of Cain and Abel, the duality of man, illustrates this point well.

    You're right: patience is vital. AI's greatest lesson is humility. It exposes our limitations and urges us to collaborate so we can steer its impact. Just as relationships deepen over time, understanding AI requires learning, not snap decisions, because the "AI debate" is really about us: our values, our fears, and our capacity to grow.

  • I think it is a very interesting idea that we do not really understand what AI is or what a human is. Right now I also find it difficult to explain what AI is, and in a few decades it will become even more difficult. That is because AI is becoming superior to human beings in some fields. For example, I read an article saying that an AI answered almost all of a university entrance examination correctly, and its score would have been enough to enter one of the most difficult universities in my country. Also, some people want AI to have feelings like we do. So I think AI will become much more similar to humans, which will make it harder to tell AI apart from a human.