Test generation
Main Idea
- Help the developer with the low-value task of generating a test
Investigation Plan
- Can we use OpenAI to generate the tests?
  => Yes, but we need more context about the component to test (a minimal sketch follows after this plan)
- How can we get more context about the component to test from its sub-components and functions?
  - Use the AST with @babel/parser to explore the sub-components of the components to test
  - Use the AST with the TypeScript library to explore the sub-files of the files to test
  - ADR - Explore AST with babel vs typescript
  => Yes (see the AST sketch after this plan)
- Can we improve generated tests by giving more context with sub-files?
  => Yes
- Can we improve the generated tests by tweaking the prompt?
  => Yes
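
To make the first question concrete, here is a minimal sketch of asking OpenAI to generate a test for a component. It assumes the `openai` npm package and an `OPENAI_API_KEY` environment variable; the model name, prompt wording, and `generateTest` helper are illustrative, not the project's exact implementation.

```ts
// Minimal sketch: ask OpenAI to generate a Jest + React Testing Library test.
// Assumes the `openai` npm package and OPENAI_API_KEY in the environment.
import { readFileSync } from "fs";
import OpenAI from "openai";

const client = new OpenAI();

export async function generateTest(componentPath: string): Promise<string> {
  const componentSource = readFileSync(componentPath, "utf-8");

  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // hypothetical model choice
    messages: [
      {
        role: "system",
        content:
          "You are a senior frontend developer. Write a Jest + React Testing Library test file for the given component. Return only code.",
      },
      { role: "user", content: `Component to test:\n\n${componentSource}` },
    ],
  });

  return response.choices[0].message.content ?? "";
}
```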
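
The AST exploration from the second question could look like the following, using `@babel/parser` and `@babel/traverse` to list the local files a component imports so that their source can be added to the prompt. The `getLocalImports` helper is hypothetical; it only collects relative imports and skips alias resolution and error handling.

```ts
// Minimal sketch: collect the relative imports of a component file so their
// source can be appended to the test-generation prompt as extra context.
import { readFileSync } from "fs";
import { dirname, resolve } from "path";
import { parse } from "@babel/parser";
import traverse from "@babel/traverse";

export function getLocalImports(filePath: string): string[] {
  const source = readFileSync(filePath, "utf-8");
  const ast = parse(source, {
    sourceType: "module",
    plugins: ["typescript", "jsx"], // handle .ts/.tsx component files
  });

  const imports: string[] = [];
  traverse(ast, {
    ImportDeclaration(path) {
      const spec = path.node.source.value;
      // Relative imports point to the sub-components / sub-files we care about.
      if (spec.startsWith(".")) {
        imports.push(resolve(dirname(filePath), spec));
      }
    },
  });
  return imports;
}
```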
Extras
How to evaluate results?
- Context: It is hard to tell whether a change to the test generation is an improvement or a regression.
- Learnings:
  - Evaluation of the test generation
  - TODO: similarity (a rough sketch of the idea follows below)
  - TODO: try to run it
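
As a rough illustration of the similarity TODO, one cheap option is a token-overlap (Jaccard) score between the generated test and a hand-written reference test. The metric and the `testSimilarity` helper below are illustrative only; embeddings or actually running the generated test would be stronger signals.

```ts
// Crude evaluation sketch: Jaccard similarity over identifier-like tokens
// between a generated test and a reference test (1.0 = identical token sets).
function tokenize(code: string): Set<string> {
  return new Set(code.split(/[^A-Za-z0-9_]+/).filter(Boolean));
}

export function testSimilarity(generated: string, reference: string): number {
  const a = tokenize(generated);
  const b = tokenize(reference);
  const intersection = [...a].filter((token) => b.has(token)).length;
  const union = new Set([...a, ...b]).size;
  return union === 0 ? 0 : intersection / union;
}
```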
How to parse the output?
- Context: When we ask OpenAI for code, we have to parse the response manually, and the model sometimes adds extra text that breaks our parser.
- Learning: Clean the model output (see the sketch below)
- Next Steps: Clean the output for the other EPIC (code generation)
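
A minimal sketch of that cleaning step, assuming the model either wraps the code in a markdown fence or surrounds it with free text; the regex and the `extractCode` helper are illustrative.

```ts
// Keep only the code from a model answer: prefer the first fenced block,
// otherwise fall back to the trimmed raw answer.
export function extractCode(modelOutput: string): string {
  const fence = modelOutput.match(/`{3}(?:\w+)?\n([\s\S]*?)`{3}/);
  if (fence) {
    return fence[1].trim();
  }
  return modelOutput.trim();
}
```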
How to deal with big prompts?
- Context: The prompt to generate tests is often too big for OpenAI.
- Learning: Extract dev standards from test code examples (see the sketch below)
- Next Steps: Try improving the extracted standards iteratively
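
One way to read "extract dev standards from test code examples": instead of pasting full example tests into every prompt, condense them once into a short list of conventions and reuse that compact list. A sketch, assuming the `openai` npm package; the model name, wording, and `extractStandards` helper are illustrative.

```ts
// Condense example test files into a short list of testing conventions once,
// then reuse that list in generation prompts instead of the full examples.
import OpenAI from "openai";

const client = new OpenAI();

export async function extractStandards(exampleTests: string[]): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // hypothetical model choice
    messages: [
      {
        role: "system",
        content:
          "Summarise the testing conventions used in these test files as a short bullet list (naming, structure, libraries, assertions).",
      },
      { role: "user", content: exampleTests.join("\n\n---\n\n") },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```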
Learnings
- ADR - Explore AST with babel vs typescript: a comparison between babel and TypeScript for parsing files into an AST
- Clean the model output (Problem)
- Evaluation of the test generation (Problem)
- Extract dev standards from test code examples (Problem)
- How to improve the prompt (Strategy)
- Use OpenAI to generate the tests (parent project: Hub Test Generation)
- Use sub-files to add more context to the prompt (Strategy)
- Use the AST with the TypeScript library to explore the sub-files of the files to test (Strategy)
- Use the AST with @babel/parser to explore the sub-components of the components to test (Strategy)