DeepPrime

A deep learning-based model that predicts prime editing efficiency for all possible pegRNAs targeting a given sequence

About

DeepPrime is a deep learning-based prime editing efficiency prediction tool developed in the Laboratory of Genome Editing, Yonsei University.
It greatly expands upon the previous state-of-the-art pegRNA activity prediction model, DeepPE, which was limited to a specific set of edit-type and edit-length combinations.

DeepPrime was trained on 259K pegRNAs with primer binding site (PBS) lengths ranging from 1 to 17 bp, reverse transcription template (RTT) lengths ranging from 1 to 50 bp, edit positions ranging from +1 to +30, and edit lengths ranging from 1 to 3 nt.
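Predictions are most reliable for designs inside these training ranges. A minimal sketch of a range check (a hypothetical helper, not part of the DeepPrime package) could look like:

```python
# Hypothetical helper (not part of DeepPrime itself): check whether a pegRNA
# design falls inside the parameter ranges DeepPrime was trained on.

def within_training_range(pbs_len: int, rtt_len: int,
                          edit_pos: int, edit_len: int) -> bool:
    """Return True if all design parameters are inside DeepPrime's
    training ranges: PBS 1-17 bp, RTT 1-50 bp, edit position +1 to +30,
    edit length 1-3 nt."""
    return (1 <= pbs_len <= 17
            and 1 <= rtt_len <= 50
            and 1 <= edit_pos <= 30
            and 1 <= edit_len <= 3)

# A typical design is in range; an RTT of 60 bp is not.
print(within_training_range(13, 20, 5, 1))   # True
print(within_training_range(13, 60, 5, 1))   # False
```

Designs outside these ranges are extrapolations, so their predicted efficiencies should be treated with caution.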

License

Specific rights, obligations, and restrictions apply to each Academic License.
For commercial use of our computational models, please contact Hyosun Lee (hyosunli@yuhs.ac) or Prof. Hyongbum (Henry) Kim (hkim1@yuhs.ac).

Contact Us

If you have any questions, bug reports, or suggestions, please do not hesitate to contact us.
Theoretical aspects:
  • Prof. Hyongbum (Henry) Kim (hkim@yuhs.ac)
For other technical inquiries:
  • Prof. Hui Kwon Kim (huikwonkim@gmail.com)
  • Goosang Yu (gsyu93@gmail.com)
  • Jinman Joseph Park (josephjinpark@gmail.com)