qihoo360/FancyVideo

Tags: Image-to-Video · Diffusers · Safetensors · English · fancyvideo · video-generation · text-to-video

Instructions for using qihoo360/FancyVideo with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Diffusers

    How to use qihoo360/FancyVideo with Diffusers:

    # install the required libraries
    pip install -U diffusers transformers accelerate

    import torch
    from diffusers import DiffusionPipeline
    from diffusers.utils import load_image, export_to_video

    # load the pipeline in bfloat16 to reduce memory use;
    # switch "cuda" to "mps" on Apple devices
    pipe = DiffusionPipeline.from_pretrained("qihoo360/FancyVideo", torch_dtype=torch.bfloat16)
    pipe.to("cuda")

    prompt = "A man with short gray hair plays a red electric guitar."
    image = load_image(
        "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
    )

    # generate frames from the input image and prompt, then save them as a video
    output = pipe(image=image, prompt=prompt).frames[0]
    export_to_video(output, "output.mp4")
  • Notebooks
  • Google Colab
  • Kaggle
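The snippet above assumes a CUDA GPU is available. A minimal sketch (assuming only that PyTorch is installed; the helper name `pick_device` is hypothetical, not part of Diffusers) for choosing the best available device before calling `pipe.to(...)`:

```python
import torch

def pick_device() -> str:
    """Return the best available torch device string: cuda, mps, or cpu."""
    if torch.cuda.is_available():
        return "cuda"
    mps_backend = getattr(torch.backends, "mps", None)
    if mps_backend is not None and mps_backend.is_available():
        return "mps"
    return "cpu"

device = pick_device()
print(device)
```

The returned string can be passed directly to `pipe.to(device)`; note that bfloat16 support varies by device, so on CPU or older hardware you may need to fall back to float32.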
FancyVideo (repository size: 31.4 GB)
  • 3 contributors
History: 10 commits
Latest commit by maao: "Merge branch 'main' of hf.co:qihoo360/FancyVideo into main" (4b4b3e8, over 1 year ago)
  • resources: update 125_frames & video_extending & video_backtracking models (over 1 year ago)
  • .gitattributes (1.52 kB): initial commit (over 1 year ago)
  • README.md (1.26 kB): Update README.md (over 1 year ago)