
AI general Thread

  1. the man who put it in my hood Black Hole [miraculously counterclaim my golf]
    This is a smart investment by the government; they should have done this 10 years ago. I expect robot AI police and government workers, and they will work for free and save the country billions. Welcome to the future.

    https://www.ctvnews.ca/politics/government-chatbots-it-s-one-possibility-under-ottawa-s-new-ai-strategy-1.6979892
  2. the man who put it in my hood Black Hole [miraculously counterclaim my golf]
    https://cryptoslate.com/ai-model-generates-real-time-playable-doom-with-no-game-engine/

    AI model generates real-time playable DOOM with no game engine
    GameNGen can simulate the classic game DOOM at over 20 frames per second, achieving visual quality comparable to the original game.

    GameNGen, a neural model-based game engine, is demonstrating the potential to revolutionize how video games are generated and played. An innovative approach developed by Google Research and Tel Aviv University researchers allows for real-time interaction with complex gaming environments without relying on traditional game engines.

    As the authors reported, GameNGen can simulate the classic game DOOM at over 20 frames per second, achieving visual quality comparable to the original game.



    The core of GameNGen’s functionality lies in its use of diffusion models, a type of generative AI that has become a standard in media generation. The process begins with training a reinforcement learning (RL) agent to play the game, recording its actions and observations. This data is then used to train a diffusion model to predict the next frame based on a sequence of past frames and actions. This method allows the model to simulate complex game state updates, such as managing health and ammo, attacking enemies, and interacting with the environment over long trajectories.
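
    A deliberately simplified sketch of the two-stage pipeline described above: a random policy stands in for the RL agent that records (frame, action) trajectories, and an ordinary least-squares regressor stands in for the conditional diffusion model that learns to predict the next frame from a window of past frames and actions. All names, shapes, and the linear "game" dynamics below are hypothetical stand-ins, not GameNGen's actual implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        FRAME_DIM = 64   # hypothetical flattened frame size (real frames are images)
        N_ACTIONS = 4    # hypothetical discrete action set
        CONTEXT = 3      # past (frame, action) pairs the predictor conditions on
        STEPS = 500      # length of the recorded trajectory

        # Stage 1: record a trajectory. A random policy stands in for the RL agent,
        # and a fixed random linear map stands in for the game engine being recorded.
        dynamics = rng.normal(scale=0.1, size=(FRAME_DIM + N_ACTIONS, FRAME_DIM))
        frames, actions = [rng.normal(size=FRAME_DIM)], []
        for _ in range(STEPS):
            a = np.eye(N_ACTIONS)[rng.integers(N_ACTIONS)]   # one-hot action
            frames.append(np.concatenate([frames[-1], a]) @ dynamics)
            actions.append(a)

        # Stage 2: build (context window -> next frame) training pairs and fit a
        # least-squares predictor as a stand-in for the conditional diffusion model.
        X, Y = [], []
        for t in range(CONTEXT, STEPS):
            past = [np.concatenate([frames[t - k - 1], actions[t - k - 1]]) for k in range(CONTEXT)]
            X.append(np.concatenate(past))
            Y.append(frames[t])
        X, Y = np.array(X), np.array(Y)
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)

        print("mean next-frame prediction error:", float(np.mean((X @ W - Y) ** 2)))

    The point of the sketch is only the data flow: agent-generated trajectories become supervised (past frames + actions -> next frame) training pairs for the generative model that then acts as the "engine".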

  3. this is shit.
  4. the man who put it in my hood Black Hole [miraculously counterclaim my golf]
    ΩꙊΩ ERECTUS: MACHINA GENERATED CONTENTUS UTERE CAUTE 鵈墸
    https://www.holyseegeneva.org/statements/2nd-session-of-the-group-of-governmental-experts-gge-on-emerging-technologies-in-the-area-of-lethal-autonomous-weapons-laws/
    https://meetings.unoda.org/ccw-/convention-on-certain-conventional-weapons-group-of-governmental-experts-on-lethal-autonomous-weapons-systems-2024
    https://www.catholicnewsagency.com/news/258241/pope-francis-tells-ai-leaders-no-machine-should-ever-choose-to-take-human-life
    https://www.catholicnewsagency.com/news/248605/vatican-swarms-of-kamikaze-mini-drones-pose-threat-to-civilians
    https://www.catholicnewsagency.com/news/40009/holy-see-renews-appeal-to-ban-killer-robots
    Statement of H.E. Ettore Balestrero, Permanent Observer to the United Nations and Other International Organizations in Geneva
    to the Second Session of the 2024 Group of Governmental Experts (GGE) on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS)
    General Exchange of Views
    Geneva, 26 August 2024
    Mr. Chair,
    At the outset, please allow me to thank you for all the preparatory work that you have conducted in advance of this second session of the Group of Governmental Experts (GGE). In particular, my Delegation wishes to thank you for the “rolling text” that you have provided, which constitutes a valuable foundation upon which to build a shared understanding.

    Speaking to the G7 leaders gathered in Italy last June, Pope Francis urged them to “reconsider the development and use of devices like the so-called ‘lethal autonomous weapons’ and ultimately ban their use. This starts from an effective and concrete commitment to introduce ever greater and proper human control. No machine should ever choose to take the life of a human being.”[1]

    For the Holy See, given the pace of technological advancements and the research on the weaponization of artificial intelligence, it is of the utmost urgency to deliver concrete results in the form of a solid legally binding instrument and, in the meantime, to establish an immediate moratorium on their development and use. In this regard, it is profoundly distressing that, adding to the suffering caused by armed conflicts, the battlefields are also becoming testing grounds for more and more sophisticated weapons.
    Mr. Chair,
    This Delegation supports your approach to analyze the potential functions and technological aspects of autonomous weapon systems. Identifying those systems that are wholly or partially incompatible with IHL and other existing international obligations could be of great benefit in adequately characterizing the systems under consideration in order to establish prohibitions and restrictions accordingly, while taking into account broader ethical considerations.

    For the Holy See, autonomous weapons systems cannot be considered as morally responsible entities. The human person, endowed with reason, possesses a unique capacity for moral judgement and ethical decision-making that cannot be replicated by any set of algorithms, no matter how complex.[2] Therefore, this Delegation appreciates the references to both “appropriate control” and “human judgement” in your rolling text, although we would welcome more clarity and common understanding of these terms.

    In this regard, it is useful to recall the difference between a “choice” and a “decision”. While pointing out that machines merely produce technical algorithmic choices, Pope Francis recalled that “human beings, however, not only choose, but in their hearts are capable of deciding. A decision is what we might call a more strategic element of a choice and demands a practical evaluation […] Moreover, an ethical decision is one that takes into account not only an action’s outcomes but also the values at stake and the duties that derive from those values.”[3]
    Mr. Chair,
    The Holy See deems it of fundamental importance to retain references to human dignity and ethical considerations at the core of our deliberations. It is necessary “to ensure and safeguard a space for proper human control over the choices made by artificial intelligence programs: human dignity itself depends on it.”[4]

    In this regard, this Delegation welcomes the prominent role given to ethical considerations at the recent conference “Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation” which was held in Vienna on 29-30 April 2024. This and other similar conferences on the same subject are further indications of an ever-growing awareness of the ethical concerns raised by the weaponization of AI. Such public awareness represents a remarkable, ever-growing “conscience publique” that cannot be ignored.

    In conclusion, the development of ever more sophisticated weapons is certainly not the solution. The undoubted benefits that humanity will be able to draw from the current technological progress will depend on the degree to which such progress is accompanied by an adequate development of responsibility and values that place technological advancements at the service of integral human development and of the common good.[5]
    Thank you, Mr. Chair.
    [1] Pope Francis, Address to the G7 Session on Artificial Intelligence, Borgo Egnazia, Italy, 14 June 2024.
    [2] Cf. Document CCW/CONF.VI/WP.3, “Translating Ethical Concerns into a Normative and Operational Framework for Lethal Autonomous Weapons Systems”, submitted by the Holy See to the Sixth Review Conference of the CCW, 13-17 December 2021.
    [3] Pope Francis, Address to the G7 Session on Artificial Intelligence, Borgo Egnazia, Italy, 14 June 2024.
    [4] Ibid.
    [5] Cf. Pope Francis, Laudato Si’: Encyclical Letter On Care For Our Common Home, n. 105.
  5. the man who put it in my hood Black Hole [miraculously counterclaim my golf]
    my AI Minecraft run