CasparCG in cloud (AWS) infrastructure and/or Docker

#1
Hi all, thanks as always for the wealth of info on here.

I know some people have had success deploying Caspar on AWS, although info is sparse. I am looking to do the same -
IP video in and out - and also to containerise it within Docker for scalability.

I thought it would be worth starting a thread to pool resources and knowledge on this subject. Any and all pointers and experiences welcome.

Cheers.

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#2
First questions first:

1) Who has attempted/achieved either containerising Caspar or deploying it to cloud infrastructure, and with what success?

2) I see from the ChangeLog for CasparCG 2.1.0 Beta 1:

Code: Select all

  o Major code refactoring:
    + Mixer abstraction so different implementations can be created. Currently
      CPU mixer and GPU mixer (previously the usage of the GPU was mandatory)
      exists.
Am I correct that this means Caspar could be deployed on a CPU-only instance, so we no longer need to be looking at (for example) AWS's nVidia instances to provide GPU acceleration? Are there any drawbacks to running it CPU-only, other than the obvious cost of the CPU power needed to match GPU performance?
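
For context, the 2.1 sample config appears to expose this choice as an <accelerator> element, and the kind of setup I have in mind is IP in / IP out only. A rough sketch of what I would drop into an instance's bootstrap script follows; the element names are my reading of the sample casparcg.config and wiki, and the stream URL and ffmpeg args are placeholders rather than a tested recipe:

Code: Select all

# Minimal sketch: CPU mixer plus an ffmpeg stream consumer, no Decklink/screen.
# Element names per the 2.1 sample config (verify against the file shipped
# with your build); the multicast address and encoder args are placeholders.
cat > casparcg.config <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <accelerator>cpu</accelerator> <!-- auto | cpu | gpu -->
  <channels>
    <channel>
      <video-mode>720p5000</video-mode>
      <consumers>
        <stream>
          <path>udp://239.10.10.10:5004</path>
          <args>-format mpegts -vcodec libx264 -preset ultrafast -tune zerolatency</args>
        </stream>
      </consumers>
    </channel>
  </channels>
</configuration>
EOF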

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#3
Maybe you can explain a bit more:

1. Do you mean CasparCG Server itself, or a web client connected to CasparCG Server?

2. Is the final output of CasparCG Server a video overlay, or something else?

As far as I know, for video overlay output CasparCG Server needs a DeckLink card in addition to a powerful GPU and CPU; the final output goes out via the DeckLink.

I think the big problem is that, so far, I have never heard of a cloud server service that offers a DeckLink card or a video overlay output feature :)

Cheers,
Sonny

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#4
sonny_xny wrote: As far as I know, for video overlay output CasparCG Server needs a DeckLink card in addition to a powerful GPU and CPU; the final output goes out via the DeckLink.
That is not the case. The output could also be NDI or a webstream, and for that no DeckLink card is required.

Until now (2.0.7) the GPU was necessary. From 2.1 on it seems to also work with a CPU mixer (I have never tried it). But for speed I would recommend the use of a GPU anyway. There are also AWS instance types with nVidia GPU support available.
Didi Kunz
CasparCG Client-Programmer, Template Maker & Live CG-Operator
Media Support, CH-5722 Gränichen, Switzerland http://mediasupport.ch/
Problems? Guide to posting Bug reports & Feature requests

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#5
Thanks for the replies.

Didikunz is correct: the output can be in a multitude of formats, not necessarily requiring hardware. In my case, no hardware is required at all. IP in, IP out.

Good to hear that my assumptions re. the GPU were correct. Yes, AWS has nVidia instances, but the plan is to deploy within a scalable Docker setup which adds more CPU resources (instances) as required. NVIDIA has a Docker integration (nvidia-docker) which exposes the hardware GPU to containers, but it seems much more complex than CPU-only (i.e. 'normal') Docker. If the system is scalable then the amount of available CPU resource is practically limitless. My gut feeling is that running multiple CPU instances will still be far cheaper than running fewer GPU instances.
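
To make the two approaches concrete, this is roughly what they look like from the Docker side; the image names are hypothetical and the GPU variant assumes NVIDIA's nvidia-docker wrapper is installed on the host:

Code: Select all

# CPU-only: nothing special, any host that runs Docker will do
docker run -d --name caspar casparcg-server:cpu

# GPU passthrough: NVIDIA's wrapper mounts the driver and GPU devices
# into the container, so the host needs the NVIDIA driver + nvidia-docker
nvidia-docker run -d --name caspar casparcg-server:gpu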

The question is whether running CasparCG within scalable Docker is possible.

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#9
In my testing, a running X server is still needed for Caspar to get an OpenGL context. This appears to be a limitation of the SFML graphics library. The log shows that it fails to access the display when initialising the screen module (even with no screen consumers enabled in my config). Perhaps if it were compiled without the screen module it might get further?

Also, from looking in https://github.com/CasparCG/Server/blob ... erator.cpp, the includes suggest that the CPU mixer is not available on Linux. I can't dig into this any further to confirm right now.

Code: Select all

// CPU image mixer is only pulled in when building with MSVC (i.e. on Windows)
#ifdef _MSC_VER
#include "cpu/image/image_mixer.h"
#endif
#include "ogl/image/image_mixer.h"
#include "ogl/util/device.h"

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#10
My experience was exactly the same - I disabled the screen consumer in the config yet still had an error in the logs. This is why I asked the question... I thought I must be doing something wrong! Good to know there is a workaround, although I'm wondering what the impact on resources may be - I don't like the idea of running Xvfb as an 'output' which is not actually an output. If it has minimal impact then fine, but otherwise I wonder if we would benefit from a headless Caspar which, as you say, has the screen consumer removed.
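
For reference, the Xvfb workaround is typically just a software X server started before Caspar; something along these lines, where the display number, resolution and run.sh entry point are assumptions for illustration:

Code: Select all

# Start a virtual framebuffer X server on display :1 (software rendered)
Xvfb :1 -screen 0 1920x1080x24 &
# Point CasparCG at it so SFML can create its OpenGL context
DISPLAY=:1 ./run.sh
# Or let xvfb-run manage the virtual display's lifetime instead:
# xvfb-run -s "-screen 0 1920x1080x24" ./run.sh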

Your comment regarding the CPU mixer being unavailable on Linux is more of a concern. Can anyone confirm one way or the other?

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#11
I have just tested 2.1 Beta 2 on both Windows and Linux, with the accelerator set to cpu and to gpu.
I used the same config of one channel with no consumers, and played the same 2 video layers in all tests.

On Windows, the log indicated which accelerator mode was in use with one of the following:

Code: Select all

Initialized Streaming SIMD Extensions Accelerated CPU Image Mixer for channel 1 
Initialized OpenGL Accelerated GPU Image Mixer for channel 1
Utilisation was as follows:
CPU Mode: 2% GPU, 75% CPU
GPU Mode: 15% GPU, 24% CPU

On Linux, the GPU mode worked as expected with 33% GPU, 19% CPU.
I was unable to run in CPU mode, as Caspar terminated with the following error at the point where it would normally print which accelerator was being used:

Code: Select all

(casparcg.config: /configuration/channels/channel[1]). Please check the configuration file (casparcg.config) for errors.
In regard to the overhead of running Xvfb: it is a virtual framebuffer, meaning the display is rendered on the CPU rather than being GPU accelerated. So in theory it should work; I do not know what performance would be like in doing this, though.
Last edited by julusian on 19 Jun 2017, 12:59, edited 1 time in total.

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#12
Thank you for running those tests, great work and really useful.

Shame the CPU mode isn't implemented on Linux (yet?). Looks like we are indeed looking at AWS's GPU instances for now. I'm really hoping to be able to use it in CPU mode though; this, combined with Docker, would mean that CasparCG would be cloud-service agnostic, running the same on any service and with a single-click install. I think that's an objective worth aiming for. Anyone know of a roadmap for CPU mode on Linux?
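
As a starting point for that single-click idea, the CPU-only case could be wrapped up along the lines of the sketch below. The base image, package list, xvfb-run entrypoint and the assumption that an extracted Linux build sits next to the Dockerfile are all mine, so treat it as a shape rather than a tested recipe:

Code: Select all

cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        xvfb libgl1-mesa-glx libasound2 \
    && rm -rf /var/lib/apt/lists/*
# Assumes the extracted CasparCG Linux build (with run.sh) sits next to this file
COPY casparcg/ /opt/casparcg/
WORKDIR /opt/casparcg
EXPOSE 5250
CMD ["xvfb-run", "-s", "-screen 0 1280x720x24", "./run.sh"]
EOF
docker build -t casparcg-server:cpu .
docker run -d -p 5250:5250 casparcg-server:cpu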

BTW:
julusian wrote:Utilisation was as follows:
CPU Mode: 2% GPU, 75% GPU
GPU Mode: 15% GPU, 24% GPU
I assumed that the second column is CPU. Thought you might want to edit to help others understand your results :)

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#13
Good spot on that typo, that is fixed.

I've had a very quick look at the history, and it appears that hellgore disabled the CPU mixer for Linux right around the time Linux support was added. If I re-enable it I get a bunch of errors in accelerator/cpu/util/xmm.h, which appear to be due to differences between the Windows and Linux compilers. I expect it shouldn't be too bad to fix up with a little time, but my knowledge of C++ is limited, especially of the SSE intrinsics that file uses heavily, so I could be wrong.

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#16
It looks like removing the requirement for a running X server is not currently possible without losing functionality.
I have managed to do it (https://github.com/Julusian/CasparCG-Se ... less-linux), but had to remove the HTML producer. It appears that the version of Chromium in use requires X11 to initialise itself; I have tested with a newer version of Chromium and there was no change.

I am now able to launch Caspar over SSH and play a looping video out to a DeckLink with no visible stutter. It works with both the CPU and GPU accelerators.
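
For anyone replicating this, driving the server from the same SSH session is just a matter of speaking AMCP on the default port; a tiny sketch, assuming the bundled AMB sample clip, the default port 5250 and the run.sh launcher:

Code: Select all

# Start the server in the background and give it a moment to come up
./run.sh &
sleep 5
# Play the bundled AMB clip on channel 1, layer 10, looping forever
printf 'PLAY 1-10 AMB LOOP\r\n' | nc -w 1 localhost 5250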

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#18
Hi there,

I've been playing with CasparCG in the cloud since the 1.7 Windows-only version: first in AWS, and then with the first Linux builds of 2.1.0 on MS Azure machines.
I'm currently working on Azure machines with GPUs, but am also testing CPU-only servers. Performance for my use cases (3 channels in HD + 1 channel in SD) has been far better with the GPU instances.

Ubuntu 16.04 is working fine with a headless X11 configuration and a regular CasparCG build compiled manually from the official git repo.
The main challenge has been getting the processes to start automatically on server boot, with the nvidia config pointing at the right PCI device.
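
For the record, the headless X11 part boils down to generating an xorg.conf pinned to the GPU's PCI bus ID and starting X with no monitor attached; roughly as below, where the bus ID is machine specific and the nvidia-xconfig flags are as I remember them, so verify against the man page on your instance:

Code: Select all

# Find the GPU's PCI bus id (nvidia-smi prints it in hex,
# xorg.conf wants it in decimal, e.g. PCI:0:30:0)
nvidia-smi --query-gpu=pci.bus_id --format=csv,noheader
# Generate an xorg.conf bound to that bus id with no display attached
sudo nvidia-xconfig --busid=PCI:0:30:0 \
     --use-display-device=none --allow-empty-initial-configuration
# Start X headless and point CasparCG at it (e.g. from a systemd unit)
sudo X :0 &
DISPLAY=:0 ./run.sh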

Re: CasparCG in cloud (AWS) infrastructure and/or Docker

#19
In the past week Sesse made a contribution to the CPU mixer that makes it up to 5x faster!

We did an experiment with GPUs on Google Cloud Platform last year, but the Tesla GPUs aren't smooth sailing when it comes to CasparCG. They are very finicky to get set up.

What is pricing like for such a system, running in either Azure or AWS? And what does your output look like: MPEG-TS over UDP, RTMP, files?
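
(For reference, by MPEG-TS over UDP I mean the ffmpeg-backed stream consumer; a rough sketch of adding one at runtime over AMCP, with the multicast address and encoder args as placeholders:)

Code: Select all

# Add an MPEG-TS/UDP output to channel 1 at runtime; everything after the
# URL is handed to the ffmpeg-based stream consumer, so tune to taste
printf 'ADD 1 STREAM udp://239.10.10.10:5004 -format mpegts -vcodec libx264 -preset ultrafast -tune zerolatency\r\n' | nc -w 1 localhost 5250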
CasparCG enthusiast and broadcast geek