naming convention for plugins / capabilities #781
Comments
@AkihiroSuda I think either of these proposals will require us to do per-service versioning, as opposed to versioning of the entire package, which is what we do now. I'd prefer we match these to the protobuf packages, if possible, but we'd have to add that.
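A minimal sketch of that idea, assuming a layout where each per-service plugin maps to a versioned protobuf package (the plugin IDs and package names below are assumptions for illustration, not the actual registry):

```go
// Hypothetical illustration only: per-service plugin identifiers keyed to the
// protobuf package (and therefore the version) of the service they implement.
package main

import "fmt"

var serviceVersions = map[string]string{
	"content":   "containerd.services.content.v1",   // assumed package name
	"images":    "containerd.services.images.v1",    // assumed package name
	"snapshots": "containerd.services.snapshots.v1", // assumed package name
}

func main() {
	for id, pkg := range serviceVersions {
		fmt.Printf("%s -> %s\n", id, pkg)
	}
}
```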
How about non-GRPC plugins?
@AkihiroSuda
Hey @AkihiroSuda, thanks for pinging me. I've been thinking about this and I don't think this is the right abstraction for my needs. Like I mentioned in #776 (comment), I think that higher level orchestrators really are going to care about what they can do with containerd (capabilities) rather than how it's implemented (specific plugins with specific versions). This is important because orchestrators may need to choose where to place a given container on a node in the cluster because of what the container requires (e.g. a container that should share a pid namespace with the host should run on a node that has that capability). Just exposing the list of plugins (or services, etc.) would require an orchestrator to maintain a mapping of plugin-to-capability internally, and to keep track of what functionality changes based on version as well. With Amazon ECS, we need to maintain this type of mapping today for Docker features, but we use a capability system for features of our agent (on-host component). If we look at modeling capabilities instead of (or in addition to) plugins, we could have ones like this:
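As a purely hypothetical sketch of that capability-oriented view (the capability names here are invented, not the ones proposed in the thread):

```go
// Invented example of an orchestrator-facing capability set. The names are
// placeholders; the point is that a caller asks "can this node do X?" rather
// than "which plugin, at which version, is installed?".
package main

import "fmt"

type Capabilities map[string]bool

func main() {
	caps := Capabilities{
		"host-pid-namespace": true,  // e.g. share the pid namespace with the host
		"apparmor":           false, // hypothetical security capability
	}

	if caps["host-pid-namespace"] {
		fmt.Println("this node can run containers that share the host pid namespace")
	}
}
```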
@Random-Liu, any thoughts on this from a k8s perspective?
Makes sense, how about the following convention:

Image Format (decoupled from distribution method):
Image Distribution:
Snapshotter:
Linux lifecycle:
Linux runtime config spec:
Linux cgroups:
Linux namespace:
Linux security:
Windows lifecycle:
We need to be careful how much policy we add to containerd. We don't want a lot of validation for fulfilling system-level things. We just try to fulfill the user's request. Validation and policy should be handled above containerd. It is a lot of code to check for these types of things and it adds a lot of complexity.
The cap ideas look good but I think we need to start with a client-side package that can be imported in the caller and do the checks client side. If we feel confident in the implementation and that we have the use cases figured out, we can move it to a GRPC service. We can design the package so that we have some built-in checks but also allow the client to register more. If we start with host-level checks it could look something like:

system.RegisterCheck("apparmor", func() (bool, error) {
    data, err := readFile("/sys/kernel/apparmor/enabled")
    if err != nil {
        return false, err
    }
    return data == "Y", nil
})

for name, supported := range system.Checks() {
    if name == "apparmor" && supported {
        // generate profile
    }
}

// or

checks := system.Checks()
if checks["apparmor"] {
    // gen profile
}

I'm fine with a more advanced naming schema, but it would be nice to keep it string : bool and not try to return some type of string data that needs to be parsed further. What do you all think?
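A hypothetical follow-on to that sketch, showing what a caller-registered check might look like; the proposed system package does not exist, so only the probe itself is runnable here, and scanning /proc/self/status for a "Seccomp:" field is just one common way to detect kernel seccomp support:

```go
// Hypothetical caller-registered check for seccomp support.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// seccompSupported reports whether the kernel exposes a Seccomp field in
// /proc/self/status, which is present when seccomp is compiled in.
func seccompSupported() (bool, error) {
	f, err := os.Open("/proc/self/status")
	if err != nil {
		return false, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "Seccomp:") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	// With the proposed package, registration would mirror the apparmor
	// example, e.g. system.RegisterCheck("seccomp", seccompSupported).
	ok, err := seccompSupported()
	fmt.Println("seccomp:", ok, err)
}
```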
Also we don't have a way to inspect the runtimes (no API in OCI) and I'm not sure this is something we want to expose either. It makes me really nervous that you somehow need this type of information to run containers and that you do these checks at runtime. You have to have some type of compatibility list of what you support and what you don't. Things like
In general, an orchestrator is going to want to know these kinds of things upfront as it'll make a decision about whether to place a container on a given host based on things like this. This becomes important in a heterogeneous fleet where hosts have different sets of capabilities or where a fleet is being upgraded to a new version of containerd (or new hosts are being added to the fleet with a different version). Suggesting that an orchestrator just call the RPC to find out whether some functionality is supported is going to require an orchestrator to build probes for behavior that it runs during initialization. This will end up being a lot of duplicated work across orchestrators and can introduce additional latency for a host joining a fleet as the probes execute.
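A rough sketch of that placement concern, with invented node and capability names, assuming each node advertises a flat capability set:

```go
// Invented illustration of capability-based placement: given each node's
// advertised capability set, keep only the nodes that satisfy everything a
// container requires before scheduling it.
package main

import "fmt"

type Node struct {
	Name string
	Caps map[string]bool
}

func eligible(nodes []Node, required []string) []Node {
	var out []Node
	for _, n := range nodes {
		ok := true
		for _, c := range required {
			if !n.Caps[c] {
				ok = false
				break
			}
		}
		if ok {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	nodes := []Node{
		{Name: "node-a", Caps: map[string]bool{"apparmor": true, "host-pid-namespace": true}},
		{Name: "node-b", Caps: map[string]bool{"apparmor": true}},
	}
	for _, n := range eligible(nodes, []string{"host-pid-namespace"}) {
		fmt.Println("can place on", n.Name)
	}
}
```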
@samuelkarp Well, you have an agent running on each server, right? Can you just let it do the checks and expose that information to the higher levels?
Yes, the agent would either be reading the capabilities from containerd via the API, or would need to perform the probes that I described.
OK. My first suggestion is that we keep this out of the API and provide a client package that can handle these checks for you.
That's not an unreasonable place to start; that will at least reduce the amount of duplicated work for various orchestrators. Long-term though I think it still makes sense for containerd to provide this information via the API once we've settled on exactly what it would look like.
@AkihiroSuda Back to the original concern for the issue, I think we should use a slash syntax here:
The convention for this would be something like this:
In English, we say that
This would at least address the original concern around naming. @samuelkarp Should we open up an issue about caps?
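Purely as an invented illustration of a slash-delimited identifier composed of a namespace, a versioned interface, and a plugin name (these segments are assumptions, not the convention settled on in this thread):

```go
// Invented sketch of a slash-delimited plugin identifier: a reverse-DNS
// namespace, a versioned interface, and a plugin name. The specific segment
// values are placeholders only.
package main

import (
	"fmt"
	"strings"
)

func pluginID(namespace, iface, name string) string {
	return strings.Join([]string{namespace, iface, name}, "/")
}

func main() {
	// e.g. a snapshotter plugin, with hypothetical naming
	fmt.Println(pluginID("io.containerd", "snapshot.v1", "btrfs"))
}
```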
SGTM, but
@AkihiroSuda The interface is part of the namespace.
So if I develop my own snapshot driver, will it be
@stevvooe PTAL?
#994 is going to solve this issue.
@stevvooe Apologies, I missed #781 (comment). Do we need a separate issue for capabilities? I thought that's what this issue was.
@samuelkarp @AkihiroSuda I think the intent here was pretty much covered with the introspection API. If there is more discussion to be had, let me know and I'll reopen.
Update: capability seems better than plugins: #781 (comment)
Available plugins in the daemon will be detectable via GRPC #776
It would be better to consider the plugin naming convention now.
e.g.

1. content-grpc
2. io.containerd.plugins.grpc.content
3. io.containerd.plugins.grpc.content.v1

I prefer 1, but maybe 2 is preferable for realism?
@stevvooe @samuelkarp