DeepPrime

Deep learning-based model that can predict the prime editing efficiency for all possible pegRNAs for a given target sequence

About

DeepPrime is a deep-learning-based prime editing efficiency prediction tool developed at the Laboratory of Genome Editing, Yonsei University.
It greatly expands upon the previous state-of-the-art pegRNA activity prediction model, DeepPE, which was limited to a specific set of edit-type and length combinations.

DeepPrime was trained on 259K pegRNAs with primer binding site (PBS) lengths ranging from 1 to 17 bp, reverse transcription template (RTT) lengths ranging from 1 to 50 bp, edit positions ranging from +1 to +30, and edit lengths ranging from 1 to 3 nt.
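The supported ranges above define the pegRNA design space DeepPrime covers. As a rough sketch (not part of the DeepPrime package; the function name and the RTT-coverage constraint are illustrative assumptions), a candidate design could be validated against these bounds like so:

```python
# Supported input ranges taken from the DeepPrime training data described above.
# This is an illustrative sketch, NOT the actual DeepPrime API.
PBS_LENGTHS = range(1, 18)     # primer binding site: 1-17 bp
RTT_LENGTHS = range(1, 51)     # reverse transcription template: 1-50 bp
EDIT_POSITIONS = range(1, 31)  # edit position: +1 to +30
EDIT_LENGTHS = range(1, 4)     # edit length: 1-3 nt

def in_supported_range(pbs_len, rtt_len, edit_pos, edit_len):
    """Check whether a candidate pegRNA design falls inside the ranges
    DeepPrime was trained on. The last condition assumes the edit must be
    fully encoded within the RTT, so it cannot extend past the RTT's end."""
    return (
        pbs_len in PBS_LENGTHS
        and rtt_len in RTT_LENGTHS
        and edit_pos in EDIT_POSITIONS
        and edit_len in EDIT_LENGTHS
        and edit_pos + edit_len - 1 <= rtt_len
    )

# Example: a 3-nt edit at position +5 with a 13-bp PBS and a 20-bp RTT
print(in_supported_range(pbs_len=13, rtt_len=20, edit_pos=5, edit_len=3))  # True
```

A filter like this is only a pre-screen on input validity; the model itself scores the efficiency of each design that passes.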

Specific rights, obligations, and restrictions apply to each Academic License.
For commercial use of our computational models, please contact Hyosun Lee ([email protected]) or Prof. Hyongbum (Henry) Kim ([email protected]).

Alternative model

PRIDICT2.0

A prime editing efficiency prediction model trained on data from HEK293T or K562 cell lines (Mathis et al., Nature Biotechnology, 2024). This model may be useful for cross-validating predictions on long edit types.
Access: Web tool / GitHub

Contact Us

If you have any questions, bug reports, or suggestions, please do not hesitate to contact us.
Theoretical aspects:
For other technical inquiries: