naming convention for plugins / capabilities · Issue #781 · containerd/containerd

naming convention for plugins / capabilities #781

Closed
AkihiroSuda opened this issue Apr 28, 2017 · 21 comments

@AkihiroSuda
Member
AkihiroSuda commented Apr 28, 2017

Update: capabilities seem better than plugins: #781 (comment)


Available plugins in the daemon will be detectable via GRPC #776

It would be better to consider a plugin naming convention now.

e.g.

  • current: content-grpc
  • proposal 1: io.containerd.plugins.grpc.content
  • proposal 2: io.containerd.plugins.grpc.content.v1

I prefer 1, but maybe 2 is preferable for realism?
@stevvooe @samuelkarp
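
As a rough illustration of how a client might consume names in either shape, here is a minimal Go sketch that filters a list of plugin IDs by their reverse-DNS prefix; the ID list is assumed to come from the introspection API proposed in #776, and the helper itself is hypothetical:

package main

import (
    "fmt"
    "strings"
)

// grpcPlugins is a hypothetical helper that picks out the GRPC plugins
// from a flat list of plugin IDs named under proposal 1 or 2.
func grpcPlugins(ids []string) []string {
    var grpc []string
    for _, id := range ids {
        if strings.HasPrefix(id, "io.containerd.plugins.grpc.") {
            grpc = append(grpc, id)
        }
    }
    return grpc
}

func main() {
    ids := []string{
        "io.containerd.plugins.grpc.content.v1",
        "io.containerd.plugins.snapshot.overlay",
    }
    fmt.Println(grpcPlugins(ids)) // [io.containerd.plugins.grpc.content.v1]
}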

@stevvooe
Member

@AkihiroSuda I think either of these proposals will require us to do per service versioning, as opposed to versioning of the entire package, which is what we do now. I'd prefer we match these to the protobuf packages, if possible, but we'd have to add that io portion.

@AkihiroSuda
Member Author

How about non-grpc plugins?

@stevvooe
Member
stevvooe commented May 1, 2017

@AkihiroSuda io.containerd.plugins.foo?

@samuelkarp
Member

Hey @AkihiroSuda, thanks for pinging me. I've been thinking about this and I don't think this is the right abstraction for my needs. Like I mentioned in #776 (comment), I think that higher level orchestrators really are going to care about what they can do with containerd (capabilities) rather than how it's implemented (specific plugins with specific versions). This is important because orchestrators may need to choose where to place a given container on a node in the cluster because of what the container requires (e.g. a container that should share a pid namespace with the host should run on a node that has that capability). Just exposing the list of plugins (or services, etc) would require an orchestrator to maintain a mapping of plugin-to-capability internally, and to keep track of what functionality changes based on version as well. With Amazon ECS, we need to maintain this type of mapping today for Docker features, but we use a capability system for features of our agent (on-host component).

If we look at modeling capabilities instead of (or in addition to) plugins, we could have ones like this:

  • Retrieve content (images) in Docker's format from a Docker V2 Registry
  • Retrieve content (images) in OCI format from a Docker V2 Registry
  • Authenticate with a Docker V2 Registry
  • Unpack content into a Snapshotter using overlay
  • Unpack content into a Snapshotter using btrfs
  • Create a container from an OCI bundle
  • Manage container lifecycle
  • Pause/resume containers
  • Enforce limits in the memory cgroup (with/without systemd)
  • Enforce limits in the cpu cgroup (with/without systemd)
  • Enforce limits in the blkio cgroup (with/without systemd)
  • Place containers inside an arbitrary cgroup hierarchy (with/without systemd)
  • Share/unshare a network namespace
  • Share/unshare a uts namespace
  • Share/unshare a mount namespace
  • Set SELinux labels
  • Set AppArmor profiles
  • etc...

@Random-Liu, any thoughts on this from a k8s perspective?
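
To make the placement argument concrete, here is a minimal sketch of how a scheduler could filter nodes once capabilities are exposed as a flat set of strings; the Node type and the capability names used here are made up for illustration:

package main

import "fmt"

// Node is a hypothetical view of a host: its name plus the flat set of
// capability strings its containerd reports.
type Node struct {
    Name string
    Caps map[string]bool
}

// eligible returns the nodes that advertise every required capability.
func eligible(nodes []Node, required []string) []Node {
    var out []Node
    for _, n := range nodes {
        ok := true
        for _, c := range required {
            if !n.Caps[c] {
                ok = false
                break
            }
        }
        if ok {
            out = append(out, n)
        }
    }
    return out
}

func main() {
    nodes := []Node{
        {Name: "node-a", Caps: map[string]bool{"pause-resume": true, "selinux": true}},
        {Name: "node-b", Caps: map[string]bool{"selinux": true}},
    }
    // Only node-a can host a container that needs pause/resume support.
    fmt.Println(eligible(nodes, []string{"pause-resume"}))
}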

@AkihiroSuda AkihiroSuda changed the title naming convention for plugins naming convention for plugins / capabilities May 6, 2017
@AkihiroSuda
Member Author

Makes sense. How about the following convention:

Image Format (decoupled from distribution method):

  • io.containerd.cap.image.docker.v2.2
  • io.containerd.cap.image.oci.v1
  • (com.example.containerd.cap.image.3rdpartyimage.v42)

Image Distribution:

  • io.containerd.cap.registry.docker.v2 (when this is used as a map key, the map value may contain its supported image format caps)

Snapshotter:

  • io.containerd.cap.snapshot.overlay (do we want .v1 suffix here?)
  • io.containerd.cap.snapshot.btrfs
  • io.containerd.cap.snapshot.naive
  • io.containerd.cap.snapshot.windows

Linux lifecycle:

  • io.containerd.cap.runtime.linux
  • io.containerd.cap.runtime.linux.pause-resume
  • io.containerd.cap.runtime.linux.criu

Linux runtime config spec:

  • io.containerd.cap.runtime.linux.spec.oci.v1

Linux cgroups:

  • io.containerd.cap.runtime.linux.cgroups.memory
  • io.containerd.cap.runtime.linux.cgroups.systemd.memory (or ...memory.systemd maybe)
  • io.containerd.cap.runtime.linux.cgroups.cpu
  • io.containerd.cap.runtime.linux.cgroups.systemd.cpu
  • io.containerd.cap.runtime.linux.cgroups.blkio
  • io.containerd.cap.runtime.linux.cgroups.systemd.blkio
  • io.containerd.cap.runtime.linux.cgroups.arbitrary-hierarchy
  • io.containerd.cap.runtime.linux.cgroups.systemd.arbitrary-hierarchy

Linux namespace:

  • io.containerd.cap.runtime.linux.ns.net
  • io.containerd.cap.runtime.linux.ns.uts
  • io.containerd.cap.runtime.linux.ns.mount

Linux security:

  • io.containerd.cap.runtime.linux.security.selinux
  • io.containerd.cap.runtime.linux.security.apparmor

Windows lifecycle:

  • io.containerd.cap.runtime.windows
  • io.containerd.cap.runtime.windows.pause-resume
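
A small sketch of how a client could test against names in this shape, assuming the daemon exposes them as plain strings (the Caps type and helpers below are hypothetical): an exact-match lookup answers "is systemd-managed memory accounting supported?", and a prefix match answers "is there any Windows runtime at all?".

package main

import (
    "fmt"
    "strings"
)

// Caps is a hypothetical set of capability strings following the
// io.containerd.cap.* convention sketched above.
type Caps map[string]bool

// Has reports whether an exact capability name is advertised.
func (c Caps) Has(name string) bool { return c[name] }

// HasPrefix reports whether anything under a given subtree is advertised,
// e.g. "io.containerd.cap.runtime.windows" for any Windows runtime support.
func (c Caps) HasPrefix(prefix string) bool {
    for name := range c {
        if strings.HasPrefix(name, prefix) {
            return true
        }
    }
    return false
}

func main() {
    caps := Caps{
        "io.containerd.cap.runtime.linux":                        true,
        "io.containerd.cap.runtime.linux.cgroups.systemd.memory": true,
    }
    fmt.Println(caps.Has("io.containerd.cap.runtime.linux.cgroups.systemd.memory")) // true
    fmt.Println(caps.HasPrefix("io.containerd.cap.runtime.windows"))                // false
}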

@crosbymichael
Member

We need to be careful how much policy we add to containerd. We don't want a lot of validation for fulfilling system-level things. We just try to fulfill the user's request. Validation and policy should be handled above containerd.

It is a lot of code to check for these types of things and it adds a lot of complexity.

@crosbymichael
Member

The cap ideas look good but I think we need to start with a client side package that can be imported in the caller and do the checks client side. If we feel confident in the implementation and that we have the use cases figured out, we can move it to a GRPC service.

We can design the package so that we have some built-in checks but also allow the client to register more. If we start with host-level checks, it could look something like:

// (imports assumed: io/ioutil, strings)
// Register a host-level check by name; the probe reports whether the
// feature is available on this host.
system.RegisterCheck("apparmor", func() (bool, error) {
    data, err := ioutil.ReadFile("/sys/kernel/apparmor/enabled")
    if err != nil {
        return false, err
    }
    return strings.TrimSpace(string(data)) == "Y", nil
})

// Iterate over all check results...
for name, supported := range system.Checks() {
    if name == "apparmor" && supported {
        // generate profile
    }
}
// ...or look a single check up directly.
checks := system.Checks()
if checks["apparmor"] {
    // gen profile
}

I'm fine with a more advanced naming scheme, but it would be nice to keep it string : bool and not try to return some type of string data that needs to be parsed further.

What do you all think?
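
For reference, a minimal sketch of what such a system package could look like internally (the package is hypothetical and simply mirrors the names used in the snippet above): RegisterCheck stores named probe functions, and Checks runs them all, folding the results into a string-to-bool map.

// Package system (hypothetical) collects named host-level checks.
package system

import "sync"

var (
    mu     sync.Mutex
    checks = map[string]func() (bool, error){}
)

// RegisterCheck adds a named probe; built-in checks and client-registered
// checks go through the same path.
func RegisterCheck(name string, fn func() (bool, error)) {
    mu.Lock()
    defer mu.Unlock()
    checks[name] = fn
}

// Checks runs every registered probe and returns name -> supported.
// A probe that returns an error is reported as unsupported.
func Checks() map[string]bool {
    mu.Lock()
    defer mu.Unlock()
    out := make(map[string]bool, len(checks))
    for name, fn := range checks {
        ok, err := fn()
        out[name] = ok && err == nil
    }
    return out
}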

@crosbymichael
Member

Also, we don't have a way to inspect the runtimes (no API in OCI), and I'm not sure this is something we want to expose either. It makes me really nervous that you somehow need this type of information to run containers and that you do these checks at runtime. You have to have some notion of compatibility: what you support and what you don't.

Things like io.containerd.cap.runtime.linux.pause-resume seem really out of scope for a check. Just call the RPC, if it does not support it you will get a failure saying "X does not support pausing a container". It will end up being more code just to check things like this than to actually implement the functionality in containerd. Just make the call and let it fail.

@samuelkarp
Member

Things like io.containerd.cap.runtime.linux.pause-resume seem really out of scope for a check. Just call the RPC, if it does not support it you will get a failure saying "X does not support pausing a container". It will end up being more code just to check things like this than to actually implement the functionality in containerd. Just make the call and let it fail.

In general, an orchestrator is going to want to know these kinds of things upfront as it'll make a decision about whether to place a container on a given host based on things like this. This becomes important in a heterogeneous fleet where hosts have different sets of capabilities or where a fleet is being upgraded to a new version of containerd (or new hosts are being added to the fleet with a different version).

Suggesting that an orchestrator just call the RPC to find out whether some functionality is supported is going to require an orchestrator to build probes for behavior that it runs during initialization. This will end up being a lot of duplicated work across orchestrators and can introduce additional latency for a host joining a fleet as the probes execute.

@crosbymichael
Member

@samuelkarp well you have an agent running on each server right? Can you just let it do the checks and expose that information to the higher levels?

@samuelkarp
Member

@samuelkarp well you have an agent running on each server right? Can you just let it do the checks and expose that information to the higher levels?

Yes, the agent would either be reading the capabilities from containerd via the API, or would need to perform the probes that I described.

@crosbymichael
Member

OK. My first suggestion is that we keep this out of the API and provide a client package that can handle these checks for you.

@samuelkarp
Member

That's not an unreasonable place to start; that will at least reduce the amount of duplicated work for various orchestrators. Long-term though I think it still makes sense for containerd to provide this information via the API once we've settled on exactly what it would look like.

@stevvooe
Member

@AkihiroSuda Back to the original concern for the issue, I think we should use a slash syntax here:

containerd.io/grpc/content
containerd.io/grpc/snapshotter
containerd.io/snapshotter/overlay

The convention for this would be something like this:

<namespace>/<interface>/<driver>

In English, we say that <driver> provides an <interface>.

This would at least address the original concern around naming.
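
As a small illustration of the <namespace>/<interface>/<driver> shape, a hypothetical parser (the Ref type is made up) would just split on the slashes:

package main

import (
    "fmt"
    "strings"
)

// Ref is a hypothetical parsed form of a <namespace>/<interface>/<driver> name.
type Ref struct {
    Namespace, Interface, Driver string
}

// parseRef splits an identifier like "containerd.io/snapshotter/overlay"
// into its three components.
func parseRef(s string) (Ref, error) {
    parts := strings.SplitN(s, "/", 3)
    if len(parts) != 3 {
        return Ref{}, fmt.Errorf("invalid plugin reference %q", s)
    }
    return Ref{Namespace: parts[0], Interface: parts[1], Driver: parts[2]}, nil
}

func main() {
    r, _ := parseRef("containerd.io/snapshotter/overlay")
    fmt.Printf("%s provides a %s\n", r.Driver, r.Interface) // overlay provides a snapshotter
}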

@samuelkarp Should we open up an issue about caps?

@AkihiroSuda
Member Author

SGTM, but <interface>/<namespace>/<driver> might be more logical, because <interface> is globally defined rather than <namespace>-local. WDYT?

@stevvooe
Member

@AkihiroSuda The interface is part of the namespace.

@AkihiroSuda
Member Author

So if I develop my own snapshot driver, will it be containerd.io/snapshotter/foo rather than mydomain.example.com/snapshotter/foo?
It might cause conflicts if other people develop their own containerd.io/snapshotter/foo as well.

@AkihiroSuda
Member Author

@stevvooe PTAL?

@stevvooe stevvooe added this to the Alpha Release 1 milestone Jun 7, 2017
@AkihiroSuda
Member Author

#994 is going to solve this issue

@samuelkarp
Member

@stevvooe Apologies, I missed #781 (comment). Do we need a separate issue for capabilities? I thought that's what this issue was.

@stevvooe
Member
stevvooe commented Dec 2, 2017

@samuelkarp @AkihiroSuda I think the intent here was pretty much covered with the introspection api. If there is more discussion to be had, let me know and I'll reopen.

@stevvooe stevvooe closed this as completed Dec 2, 2017