FAQs

Question: What languages are supported?

Answer: Ask GPT and the Context Menu features work for most common languages. Ask Codebase works for the following file types: java, py, ts, js, html, cs, lua, go, php, rb, cpp, c, h, yaml, json, md, tex, swift, rs, sc, proto, rst.

Question: Is ChatGPT EasyCode free to use?

Answer: ChatGPT EasyCode is completely free to use. However, to access GPT-4, you need to buy tokens. All the other features including unlimited access to gpt-3.5-turbo are free. Pricing is subject to change.

Question: What happens to my data? Do you store my data?

Answer: We never store your code on our servers. Your data does, however, leave your machine: we use OpenAI to process it for embedding and querying. If you opt out of data collection, your data will not be used to train other AI models. OpenAI retains the data for 30 days for abuse and misuse monitoring, after which it is automatically deleted. See OpenAI’s Privacy Policy.

Question: How does codebase indexing work?

Answer: At a high level, we use embeddings to create a vector representation of your codebase, then use it to find the code most relevant to your question and intelligently query GPT with that context.
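The general idea can be sketched as follows. This is a toy illustration only: the character-frequency "embedding" below is a stand-in for a real embedding model (the extension uses OpenAI's embedding API), and the file contents are made up.

```python
import math

def embed(text):
    """Toy embedding: character-frequency vector over lowercase letters.
    A real system would call an embedding model here instead."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# 1. Index: embed each file (or chunk) in the codebase once.
codebase = {
    "auth.py": "def login(user, password): ...",
    "db.py": "def connect(url): ...",
}
index = {path: embed(src) for path, src in codebase.items()}

# 2. Query: embed the question, rank chunks by similarity, and inject
#    the top matches into the GPT prompt as context.
question = "how does login work"
q_vec = embed(question)
ranked = sorted(index, key=lambda p: cosine_similarity(q_vec, index[p]), reverse=True)
print(ranked[0])  # the most relevant chunk for this toy example
```

The key design point is that only the highest-ranked chunks are sent to GPT, which is why Ask Codebase can answer questions about a codebase far larger than the model's context window.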

Question: Can I use my own OpenAI key?

Answer: Yes. You can add your own OpenAI API key in the extension settings.
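For example, in your VS Code settings.json (which accepts JSONC comments). The setting key shown here is hypothetical; check the extension's Settings UI for the exact name:

```jsonc
{
  // Hypothetical key name for illustration -- verify in the Settings UI.
  "easycode.apiKey": "sk-..."
}
```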

Question: I asked “Which GPT version are you” and it tells me it’s GPT-3, but when I ask the same question to chatGPT, it tells me it’s GPT-4. How do I know if this is really GPT-4?

Answer: GPT-4 is trained on pre-2021 data, so it shouldn't know that GPT-4 exists, and answering "GPT-3" is expected. In rare cases, it can hallucinate and tell you it's GPT-4 as well. You can validate this yourself if you have access to GPT-4 in the OpenAI Playground (not ChatGPT Plus). The reason ChatGPT tells you it's using GPT-4 is that ChatGPT has been specifically configured to answer the question that way, most likely through prompt engineering.

Question: Do you accept other forms of payment such as crypto or PayPal?

Answer: Not at the moment, but we may add these in the future.

Question: Why are GPT-4 tokens used up so quickly?

Answer: First, let’s make sure you understand how tokens work:

  • Each word is roughly 1.33 tokens.
  • Completion (GPT output) costs twice as many tokens as the prompt (your question).
  • Follow-up questions automatically include history as context, so they consume more tokens.
  • “Ask codebase” injects relevant code from your codebase as context, so it’s the most costly.
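As a rough illustration of these heuristics, the sketch below estimates token usage for a single exchange. The numbers are approximations for intuition, not actual billing logic:

```python
def estimate_tokens(prompt, completion, history=""):
    """Rough token estimate: ~1.33 tokens per word, with completion
    tokens weighted double relative to prompt tokens."""
    words = lambda text: len(text.split())
    prompt_tokens = (words(prompt) + words(history)) * 1.33
    completion_tokens = words(completion) * 1.33
    return round(prompt_tokens + 2 * completion_tokens)

print(estimate_tokens(
    "Explain this function",
    "It parses the config file and returns a dict",
))
```

Note how a long completion or a long chat history dominates the estimate, which is why short, focused questions stretch your tokens furthest.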

In general, GPT-4 is not cheap. We have some recommendations for reducing cost:

  • Only use GPT-4 for questions that GPT-3.5 can’t handle.
  • For questions that only require local context, select code and ask GPT instead of “asking codebase”.
  • “Asking codebase” should be used when larger codebase context is required, and with careful prompting.


Copyright © 2024 Personabo Technologies, Inc. All rights reserved. Privacy Policy