
    How to Use Gemini 3 Pro Efficiently

By Awais, November 20, 2025

Google recently released its latest LLM: Gemini 3. The model was long-awaited and widely discussed before its release. In this article, I’ll cover my first experience with the model and how it differs from other frontier LLMs.

The goal of this article is to share my first impressions of Gemini 3, highlighting what works well and what doesn’t. I’ll cover my experience using it both in the console and while coding with it.

[Infographic: an overview of this article, covering first impressions of Gemini 3 Pro from both the Gemini console and coding with it, including what I like and dislike about the model. Image by ChatGPT.]

    Why you should use Gemini 3

In my opinion, Gemini 2.5 Pro was already the best conversational LLM available before the release of Gemini 3. The only area where I believe another LLM was better was coding, where Claude Sonnet 4.5 (thinking) had the edge.

The reason I believe Gemini 2.5 Pro is the best non-coding LLM is its:

    • Ability to efficiently find the correct information
    • Low rate of hallucinations
    • Willingness to disagree with me

    I believe the last point is the most important. Some people want warm LLMs that feel good to talk to; however, I’d argue you (as a problem-solver) always want the opposite:

    You want an LLM that goes straight to the point and is willing to say that you are wrong

    My experience was that Gemini 2.5 was far better at this, compared to other LLMs such as GPT-5, Grok 4, and Claude Sonnet 4.5.

Considering that Google, in my opinion, already had the best LLM out there, the release of a newer Gemini model is very interesting, and something I started testing right after release.


It’s worth pointing out that Google has released Gemini 3 Pro but not yet a Flash alternative, though it’s natural to expect such a model soon.

This article is not sponsored or endorsed by Google.

    Gemini 3 in the console

I first started testing Gemini 3 Pro in the console. The first thing that struck me was that it is relatively slow compared to Gemini 2.5 Pro. However, this is usually not an issue, as I mostly value intelligence over speed (up to a certain threshold, of course). Though Gemini 3 Pro is slower, I definitely wouldn’t say it’s too slow.

Another point I noticed is that when explaining concepts, Gemini 3 more often creates or utilizes images. For example, when discussing EPC certificates with Gemini, the model found the image below:

[Image: the illustration Gemini 3 Pro found while answering my questions about EPC certificates. Image by Gemini 3 Pro.]

    I also noticed it would sometimes generate images, even if I didn’t explicitly prompt for it. The image generation in the Gemini console is surprisingly fast.


The moment I was most impressed by Gemini 3’s capabilities was while analyzing the first research paper on diffusion models, which I discussed with Gemini to understand it better. The model was, of course, good at reading the paper, including text, images, and equations; however, other frontier models possess this capability too. What impressed me most was chatting with Gemini 3 about diffusion models while trying to understand them.

I had a misconception about the paper: I thought we were discussing conditional diffusion models, when we were in fact looking at unconditional diffusion. Note that this was before I even knew the terms conditional and unconditional diffusion.

Gemini 3 then called out that I was misunderstanding the concepts, efficiently grasped the real intent behind my question, and significantly helped me deepen my understanding of diffusion models.

[Screenshot: a good interaction with Gemini 3 Pro, where the model understood where I was misunderstanding the topic at hand and called it out. Being able to call out things like this is an important trait for LLMs, in my opinion. Image from Gemini.]

    I also took some of the older queries I ran in the Gemini console with Gemini 2.5 Pro, and ran the exact same queries again, this time using Gemini 3 Pro. They were usually broader questions, though not particularly difficult ones.

The responses I got were overall quite similar, though I did notice Gemini 3 was better at telling me things I didn’t know, or uncovering topics and areas I (or Gemini 2.5 Pro) hadn’t thought about before. For example, when discussing how I write articles and what I can do to improve, I believe Gemini 3 was better at providing feedback and coming up with more creative approaches to improving my writing.


    Thus, to sum it up, Gemini 3 in the console is:

    • A bit slow
    • Smart, and provides good explanations
    • Good at uncovering things I haven’t thought about, which is super helpful when problem-solving
    • Willing to disagree with you and call out ambiguities, traits I believe are really important in an LLM assistant

    Coding with Gemini 3

After working with Gemini 3 in the console, I started coding with it through Cursor. My overall experience is that it’s definitely a good model, though I still prefer Claude Sonnet 4.5 (thinking) as my main coding model. The main reason is that Gemini 3 too often comes up with overly complex solutions and is slower. That said, Gemini 3 is a very capable coding model that might be better suited for coding use-cases other than mine; I’m mostly coding infrastructure around AI agents and CDK stacks.

    I tried Gemini 3 for coding in two main ways:

    • Making the game shown in this X post, from just a screenshot of the game
    • Coding some agentic infrastructure

First, I attempted to make the game from the X post. On the first prompt, the model made a Pygame version with all the squares, but it forgot all the sprites (art), the bar on the left side, and so on. Basically, it made a very minimalist version of the game.

    I then wrote a follow-up prompt with the following:

    Make it look properly like this game  with the design and everything. Use

Note: When coding, you should be far more specific in your instructions than my prompt above. I used this prompt because I was essentially vibe coding the game, and wanted to see Gemini 3 Pro’s ability to create a game from scratch.

    After running the prompt above, it made a working game, where the guests are walking around, I can buy pavements and different machines, and the game essentially works as expected. Very impressive!


I continued coding with Gemini 3, this time on a more production-grade code base. My overall conclusion is that Gemini 3 Pro usually gets the job done, though I run into bloated or worse code more often than when using Claude Sonnet 4.5. Additionally, Claude Sonnet 4.5 is quite a bit faster, making it my clear model of choice for coding. That said, I would regard Gemini 3 Pro as the second-best coding model I’ve used.

I also think that which coding model is best depends highly on what you’re coding. In some situations speed matters more; for particular forms of coding another model might be better; and so on. You should really try out the models yourself and see what works best for you. The price of using these models is dropping rapidly, and you can easily revert any changes made, making it cheap to test different models.

    It’s also worth mentioning that Google released a new IDE called Antigravity, though I haven’t tried it yet.

    Overall impressions

    My overall impression of Gemini 3 is good, and my updated LLM usage stack will look like this:

    • Claude Sonnet 4.5 (thinking) for coding
    • GPT-5 when I need quick answers to simple questions (the GPT app works well opened with a shortcut)
    • GPT-5 for generating images
    • Gemini 3 when I want more thorough answers and longer discussions about a topic, typically to learn new topics, discuss software architecture, or similar

The pricing for Gemini 3 per million tokens looks like the following (as of November 19, 2025, from the Gemini Developer API docs):

    • If you have less than 200k input tokens:
      • Input tokens: 2 USD
      • Output tokens: 12 USD
    • If you have more than 200k input tokens:
      • Input tokens: 4 USD
      • Output tokens: 18 USD
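The tiered pricing above is easy to turn into a quick cost estimator. Below is a minimal sketch (my own helper, not an official Google utility); I assume the 200k boundary applies to the input-token count and that the output rate follows the same tier, per the docs quoted above.

```python
def gemini3_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Estimate Gemini 3 Pro API cost from the tiered per-million-token prices.

    The tier is chosen by prompt size: up to 200k input tokens uses the
    lower rates, larger prompts use the higher rates (assumption: the
    output rate follows the input-token tier).
    """
    if input_tokens <= 200_000:
        input_rate, output_rate = 2.0, 12.0   # USD per 1M tokens
    else:
        input_rate, output_rate = 4.0, 18.0
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A 50k-token prompt with a 2k-token answer:
print(round(gemini3_cost_usd(50_000, 2_000), 4))  # 0.124
```

So a typical long-document query costs on the order of a tenth of a dollar, which is why trying the model out on your own workloads is cheap.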

    In conclusion, I have good first impressions from Gemini 3, and highly recommend checking it out.

    👉 Find me on socials:

    💻 My webinar on Vision Language Models

    📩 Subscribe to my newsletter

    🧑‍💻 Get in touch

    🔗 LinkedIn

    🐦 X / Twitter

    ✍️ Medium
