Codex is now in the ChatGPT mobile app

(openai.com)

109 points | by mikeevans 3 hours ago

17 comments

  • reassess_blind 25 minutes ago
    Is there a native way to work remotely with Claude/Codex on a local folder or git repo on your main machine, without having to connect it to GitHub? For playing around with creating apps for personal use, I’d rather just keep the files local.
    • barrkel 9 minutes ago
      This is what /remote-control does in Claude Code, once it's running on your main machine. You can open it up in the phone app.
    • iamjs 23 minutes ago
      I think the `/remote-control` feature does this, if I understand you correctly.
      • maille 0 minutes ago
        [delayed]
      • DonsDiscountGas 18 minutes ago
        It's supposed to. I've always found it buggy and unreliable but maybe that's just me. (This command exists in Claude btw not sure about Codex)
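
        (A sketch of the flow these replies describe, assuming Claude Code is installed on the main machine; the slash command name is taken from the comments above, not independently verified:)

            # On the machine that holds the repo, inside a Claude Code session:
            claude             # start Claude Code in the project directory
            /remote-control    # per this thread, links the session to the phone app
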
  • Alifatisk 2 hours ago
    What's crazier is that Codex is free. I thought I had to pay to even try it out, but nope, you can use the desktop app or CLI for free; it's apparently included in the free plan. You just have to sign in to your ChatGPT account.

    Of course, I'm aware the caveat here is that all my interactions become training data, but I'm fine with that. Even Qwen CLI discontinued its free plan.

    • Rover222 1 hour ago
      I think it's free for about 2 useful requests and then you have to upgrade or wait?
      • osiris970 27 minutes ago
        So basically a $20 Claude plan lmao
        • replwoacause 14 minutes ago
          I stopped using my Claude subscription because the limits became so prohibitive. I'm back to ChatGPT and Codex full time and have been pretty happy. I miss Claude's tone/writing style, but I don't miss the frustration of being told I've reached my plan limits in a comically short amount of time.
    • throwaway613746 1 hour ago
      [dead]
  • jumploops 55 minutes ago
    I’ve been using Codex from my phone for the past couple of months (through a tunnel, not this app).

    I was initially quite excited, but I’ve found the results are less than great compared to being at a keyboard.

    Something about the smaller screen size and/or lack of keyboard causes me to direct the agent less, which in turn creates more tech debt/code churn/etc.

    Maybe I’m just showing my age and should practice voice dictation or something more, but my thoughts flow faster and more clearly on a keyboard (fewer ums).

    • aiscoming 17 minutes ago
      the ums are exactly the sign that you speak much faster than you type, so you need a pause for your thoughts to catch up
    • fowlie 51 minutes ago
      I've been trying voxtype (using Whisper models) lately, and to my surprise all my ums are filtered out. It's really good now, actually!
  • vohk 1 hour ago
    Dang, I thought this was going to be integration for Codex Cloud, not the (still not available for Linux) Codex app. Not even Codex CLI, alas. You can still access the cloud option from a mobile browser well enough, but I prefer an app UI for poking at things on the go.
    • tekacs 1 hour ago
      You can do this from the CLI - `codex remote-control` works on Linux (I have no affiliation, just something I noticed).

      They might just not have cut a new build yet today. It 'works' on master, but the mobile app thinks your build is outdated (v0.0.0) if you build from master without overriding the version, so it's probably easiest to wait until they cut a build, if they haven't.

      • embedding-shape 53 minutes ago
        > You can do this from the CLI - `codex remote-control` works on Linux (I have no affiliation, just something I noticed).

        Woah, hadn't seen this before!

        Off-topic: how long are people's compile times for codex-rs in openai/codex? Even my very beefy computer takes something like 30 minutes to compile in release mode, which makes me wonder why it's so slow and how this TUI got so large. But then I remember: agents like to write a lot of code, and compilers get slower when they have to compile a lot of code :)

        • tekacs 8 minutes ago
          Try turning off LTO. Their default codex-rs/Cargo.toml uses `lto = "fat"`, which is expensive and slow, and you really don't need it for a local build that you're not distributing.

          In my experience, although the build is a little slow, it's that LTO step that takes a million years.
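
          (A minimal sketch of that change; `"off"`/`"thin"` and the env-var override are standard Cargo features, nothing codex-specific:)

              # codex-rs/Cargo.toml -- local-only edit for faster dev builds
              [profile.release]
              lto = "off"    # upstream default is "fat"; "thin" is a middle ground

              # Or, without touching the file at all:
              #   CARGO_PROFILE_RELEASE_LTO=off cargo build --release
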

      • vohk 1 hour ago
        Oh, that's promising, thanks! I've just been using the npm version.
      • asadm 1 hour ago
      thanks. i don't use the app, so this is cool
  • iridione 1 hour ago
    This is neat! Now I'm curious: what's left to innovate in the coding agent space? Sure, there are the usual suspects like maintenance, security, reliability, and other scalability improvements, and it looks like those will be addressed in the next year or two.
    • thornewolf 52 minutes ago
      there is something "wrong" with the ux that is hard to pin down. these things generate even text summaries more rapidly than i can read them. i need a better method for dumping info into my brain, plus dynamic control (if necessary)
      • ssl-3 18 minutes ago
        When I take time to read all of the output, I often find that it's mostly noise. I don't like noise so I usually don't bother.

        But a person can use subagents, if they want, to filter that down. This burns tokens in a big hurry, but I think subagents can be arbitrary local commands (e.g., a local LLM).

        Or, you know: Just slow down. :) It doesn't always have to be a race, does it?

  • impulser_ 13 minutes ago
    Say what you want about OpenAI, but their software is actually pretty damn good, especially compared to Anthropic and Google. Anthropic is just sloppy, and Google just doesn't live on this planet.

    Both of the Codex apps are very good.

    I tried this out and it works significantly better than Claude's remote control; in fact, the first few times I tried Claude's remote control it didn't even work, and to this day it's very buggy.

  • asadm 1 hour ago
    I use Termius on my phone to remote in and have the agent do stuff while I chill or am on the road. This seems useful too.
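
    (A sketch of that kind of setup, with placeholder host and session names; tmux is one common way to keep the agent alive across mobile disconnects:)

        ssh user@your-machine     # from Termius or any mobile SSH client
        tmux new -A -s agent      # attach if the session exists, else create it
        codex                     # run the CLI inside tmux so it survives drops
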
  • schnitzelstoat 1 hour ago
    This is really useful for when you just need to approve plans or make small decisions.
  • tekacs 1 hour ago
    It's refreshing that unlike Anthropic's Remote Control, this actually... works.

    Feels like a testament to the value in taking time and doing it properly.

    Now if only codex got its 1M token context window back.

    ---

    Edit: Hmmm. Maybe I spoke too soon. Sigh. Definitely _more_ reliable by far overall, but I still have queued messages with responses on my phone that don't show up on my computer, and responses that don't show up on my phone.

    Edit 2: New threads created from my phone seem to stall out a little, but ones that are already underway are behaving reasonably well.

    • 20kleagues 1 hour ago
      Out of curiosity, what issues did you face with remote control on Claude? I use it daily and it seems to work pretty well (bar the issues when my Mac goes to sleep and the session disconnects, but that's an issue on my end).
      • tekacs 1 hour ago
        Myriad issues, to be honest. I find it to constantly be in a 'torn' state, the UI is very mushy on mobile with a lot of the affordances from desktop missing, and it's distinctly less useful when you can't edit, rewind, start a new thread, etc.
      • RayVR 1 hour ago
        My own experience has been that it works for about five minutes before it just disconnects or hangs. I’ve never been able to use it successfully.
  • fHr 49 minutes ago
    Rust and open-source W
  • Razengan 52 minutes ago
    Codex has been great in the last 3-4 months I've been using it, almost exclusively to review existing GDScript code, and this was the feature I wanted most, because with gamedev you get the best ideas when you're out and about or in bed :)

    Claude, on the other hand, has been so janky all around, from the UX to the UI to the AI itself, that it's baffling how it's more popular here on HN: https://i.imgur.com/jYawPDY.png

    Sadly, this remote control feature doesn't seem to work Mac-to-Mac yet? I love my MacBook Neo as a "thin client" for AI and keep the MacBook Pro at home/hotel, but I haven't found a good way to share Codex desktop sessions, except maybe screen sharing over..

  • AIMC 46 minutes ago
    [flagged]
  • comment0r 1 hour ago
    [dead]
  • stavros 1 hour ago
    The best way I've found to work with LLMs is another OpenAI project, Symphony (which I implemented for Linear/GitHub and OpenCode[0]).

    It integrates with your issue tracker and makes the tracker the UI for the LLM. It also clones the repo for every ticket and can set up fixtures, etc. I can work on multiple items at a time, which is fantastic, because otherwise you spend a lot of time waiting on the LLMs.

    [0] https://github.com/skorokithakis/symphony

  • Squab 1 hour ago
    friends, you don’t have to always be productive. leave the agent on the computer and take care of yourself.
    • jorl17 1 hour ago
      For many people, that's exactly why this is useful: less time at the computer, more time doing other things, with occasional check-ins.

      In those scenarios, the goal is not to "work at any time" but to "be anywhere at any time", or rather, to "be able to work from anywhere, doing anything".

      Sort of... I guess.

  • mv4 1 hour ago
    Can someone recommend an IDE that can be used with a self-hosted model (via an OpenAI-compatible API or similar)?
    • aiscoming 8 minutes ago
      VS Code supports local models (bring your own key/model).

      You need a model server: Ollama, llama.cpp, or LM Studio.
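
      (A sketch of the usual wiring, with an illustrative model name; Ollama and llama.cpp's llama-server both expose an OpenAI-compatible endpoint that such clients can point at:)

          ollama pull qwen2.5-coder    # fetch a local coding model (name is illustrative)
          ollama serve                 # serves an OpenAI-compatible API on localhost:11434

          # Any OpenAI-style client can then use http://localhost:11434/v1 as its base URL:
          curl http://localhost:11434/v1/chat/completions \
            -H "Content-Type: application/json" \
            -d '{"model": "qwen2.5-coder", "messages": [{"role": "user", "content": "hello"}]}'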