Make file attribute changes more atomic #331
Closed
mark wants to merge 1 commit from (deleted):__file-atomic-attributes into master
Reference: ungleich-public/cdist#331
No description provided.
I ran into a probably niche case with the __file type when configuring files related to ssh. I saw this in two cases:
The root cause seems to be that the upload of a new or modified file is executed through the code generated in gencode-local. This creates commands to generate a unique remote filename, upload the file and move it into the final location.
But file attributes are put in place in gencode-remote, which will always run in a separate ssh session. Any configuration that impacts the ssh connection can therefore lead to a situation where the configuration run is aborted and the target is left in a partially unreachable state for ssh.
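Roughly, the current split looks like this; a simplified sketch, not the literal __file code, with illustrative paths and with $__remote_exec/$__remote_copy standing for cdist's configured ssh/scp commands:

```sh
# what the code generated by gencode-local effectively does (ssh session 1):
destination_upload=$($__remote_exec $__target_host "mktemp /etc/geofilter.cdist.XXXXXXXXXX")
$__remote_copy /path/on/the/cdist/host/geofilter "$__target_host:$destination_upload"
$__remote_exec $__target_host "mv \"$destination_upload\" /etc/geofilter"

# what the code generated by gencode-remote does (separate ssh session 2, later):
chown root:root /etc/geofilter
chmod 0755 /etc/geofilter
```

If session 1 changes something the next ssh connection depends on, session 2 never runs and the file is left in place without its attributes.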
The only option I could come up with to mitigate this is to move all code that determines the changes needed on the target file into gencode-local, and to generate a command sequence that is run through a single ssh session.
I've tested this and it seems to work with the test cases and production manifests I am running.
I'm not sure if this is the best solution for this problem, or whether I'm overlooking some other option.
Regards,
Mark
Hey @mark! Thanks for the PR.
I am not fully getting the impact just yet, but moving the whole logic into __gencode_local as a starter "feels" wrong (doesn't mean your solution is right). Can you describe the practical implications of the non-atomicity that you are experiencing?
The __file type is indeed a bit special, as it uses most features of cdist and from the get-go is the most intensive type :-)

Hi Nico,
No problem, I'll describe the case in more detail. I ran into this issue recently when deploying a filter script which is used to geoblock ssh sessions through hosts.allow.
For context, the hosts.allow:
And hosts.deny:
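Purely for illustration (the concrete files and paths here are assumptions, not taken from the real setup), a TCP-wrapper based geoblock typically looks something like:

```sh
# /etc/hosts.deny - deny ssh by default
sshd: ALL

# /etc/hosts.allow - let an external filter decide per client address;
# the filter path is hypothetical
sshd: ALL: aclexec /usr/local/sbin/geofilter %a
```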
The filter script sometimes gets updates, so this is deployed with cdist (the hosts.allow/deny also, but that is a one time thing, they don't change, so it isn't relevant). I've made a simple init manifest to demonstrate:
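A minimal init manifest in that spirit could look like the following; the destination and --source paths are hypothetical:

```sh
# deploy the geoblocking filter with cdist's __file type;
# it must end up executable, which is exactly the attribute that arrives late
__file /usr/local/sbin/geofilter \
    --source /etc/cdist/files/geofilter \
    --owner root --group root \
    --mode 0755 \
    --state present
```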
So, starting out with a state of the client that works, the filter file looks like this on the client:
Now I run the type (the only thing in the manifest) from the cdist server (after having made a modification to the filter script of course):
Then, looking on the client, the geofilter file looks as follows:
From what I see, what goes wrong in this specific case is that the file is copied to the client by the code from gencode-local. After the copy it is moved into its final destination; this step is done in one ssh command. Then the code generated by gencode-remote is run. These are the commands to change the file's attributes, and they run in a new ssh command. But that ssh command will now fail, because geofilter is not executable yet.
There is another case with which I ran into the same problem, but I think this (hopefully) illustrates what I ran into.
Again, I'm not sure the way I tried to fix it is the right one, but I couldn't think of anything else to make the move to the final destination and the attribute change more atomic.
Regards,
Mark
Thanks a lot for the insight, @mark! I get the problem now; rephrasing it in my own words for later reference:
Good news is, we probably don't need a patch at all (or at least not to solve your problem):
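For reference, ssh connection multiplexing (control sockets) is the mechanism this points at: when the cdist host keeps a control master open, the later gencode-remote session is carried over the TCP connection that was already accepted, so hosts.allow is not evaluated a second time. A hedged example of such a setup, roughly along the lines of what the cdist documentation suggests (flag names and the ControlPath are assumptions to verify against your cdist and OpenSSH versions):

```sh
# run cdist with a persistent ssh control socket so all sessions of a run
# share one TCP connection to the target
cdist config \
    --remote-exec "ssh -o ControlMaster=auto -o ControlPersist=2h -o ControlPath=~/.ssh/cdist-%r@%h:%p" \
    --remote-copy "scp -o ControlMaster=auto -o ControlPersist=2h -o ControlPath=~/.ssh/cdist-%r@%h:%p" \
    target.example.com
```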
Now there are two things that we might still want to patch:
We do have another criterion to watch:
My first idea was to shift the mv into code-remote and execute the permissions change before the move.

Another direction would be to include more into gencode-local. Not exactly sure what's the best way atm, will probably need a couple of days to think about it.
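A minimal sketch of that first idea, assuming the temporary upload name can be made known to gencode-remote (which is exactly the open question discussed below); owner and mode are illustrative:

```sh
# code generated by gencode-remote, all in one ssh session on the target:
chown root:root "$destination_upload"   # fix attributes while the file still sits at the temp name
chmod 0755 "$destination_upload"
mv -f "$destination_upload" "$destination"   # only then move it into place
```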
Hi Nico,
As far as I know I don't have any specific changes active regarding the use of control sockets on the cdist server. I will look into that to see if there is something wrong there, because if new connections through the control socket aren't checked against hosts.allow anymore, this shouldn't happen.
For me the problem isn't so much that __file creates a new file with the wrong permissions; it is the split that occurs between gencode-local and gencode-remote. In my opinion the way gencode-local is used within the file type is wrong. It makes sense to have the remote copy there, but commands run on the target should not be in gencode-local. That is, however, unavoidable if you want to find a safe, temporary filename on the target to place the file to be transferred; that is where the problem originates. The filename is determined through mktemp in gencode-local and needs to be used there to copy the file to the remote side. But as that filename is stored in the environment, it is only available in the scope of the commands run from gencode-local. And thus you need to place the move to the final destination there as well (which is again a command run from gencode-local on the remote).
Putting the move in gencode-remote would be the complete solution, but how do you generate a safe, temporary filename, where the file to be copied can be placed, that is available to both gencode-local and gencode-remote?
I thought about always creating one with an explorer and always removing it even if it isn't used. But that is just very ugly in my opinion.
Regards,
Mark
Hey Mark,
I was thinking in a similar direction, that the mv should just be "on the other side".
One problem that we have as a base is: where do we temporarily copy the file to, and do we actually need a copy?
In theory we could copy the file inside /var/lib/cdist, into the object folder. This might be a bit tricky in terms of space requirements, but would give a secure destination (/var/lib/cdist is cdist-only, writable only by us) with predictable naming.
I think in the beginning of cdist some people had specific mount options for /var, which made this tricky, but I might also be mistaken on that one and /var/lib/cdist might be an option.
Another approach I was thinking about is generating the filename like mktemp -u does, however that one is not secure.

Then again, code-local could in theory write the filename into the object directory on the other side and thus code-remote could actually use it. So far I'd say that could be the best solution.
pinging another oldtimer @steven, he might remember why we opted for mktemp in the first place.
Hi Nico,
Writing the filename to some place that is also accessible to the remote side would be a great way out of this!
But from what I understood of the documentation, the code generated by gencode-local and gencode-remote has read-only access to the information referenced by $__object and $__object_id. The unique filename should be generated on the remote side. This sounds like the best solution, as it will then be generated at the same location as the original file, so problems like filesystem sizes etc. should not be relevant (or are at least expected, as that is the filesystem the modification is made on). It isn't possible to generate the unique filename during the runtime of gencode-local, so it must happen in the generated code. And then it can't communicate the resulting filename to the code generated by gencode-remote.
So you would want to have something that executes before gencode-local and gencode-remote and provides the unique filename on the remote side. And I think the only option there is the explorer. But that is pretty ugly imho if the explorer actually also creates the (empty) file on the filesystem, because that would always require cleanup. If the explorer only generates a safe filename for later use, that would be less of an issue I think. But that is of course less safe (but pretty simple to implement ;) ).
Regards,
Mark
If I am not mistaken, and the cdist-reference so far agrees with me, the $__object variable is available for code. See https://www.cdi.st/manual/latest/cdist-reference.html for details.

Do you want to give it a spin with adding something along these lines in gencode-local, and then the counterpart in gencode-remote?
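A plausible reading of the suggestion, sketched rather than quoted; note that, as the test below shows, this variant does not work, because the remote copy of $__object is not synchronised back:

```sh
# code-local (runs on the cdist host) remembers the generated name:
echo "$destination_upload" > "$__object/files/destination_upload"

# code-remote (runs on the target, against its own copy of the object directory)
# would then try to read it back:
destination_upload="$(cat "$__object/files/destination_upload")"
chmod 0755 "$destination_upload" && mv -f "$destination_upload" "$destination"
```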
Hi Nico,
Did a quick test, but when executing the output from gencode-remote I get:
From what I read in the documentation I assumed this would not work because of the statement:
Assuming I did the test correctly, it seems to match the documentation (always good ;) ). This would have been a nice way to solve it.
Regards,
Mark
You are right, we did not really consider the IPC between code-local and code-remote on the remote side.
I tested using a copy of the __file type, resulting into:

Using this for debugging, we see the old logic and order: code-local and code-remote.

Using gencode-local and gencode-remote with this shows we have different path references to it:

And in between the code-local execution and the code-remote execution there is no synchronisation of the $__object directory anymore.

Long talk short: what would probably make sense is to add a reference to the remote path of the object to gencode-local as well as code-local, probably naming it __object_remote.

Hi Nico,
That sounds like a good solution. It would make for a nice fix for this problem, and also the possibility to clean up the __file type, moving the mv from gencode-local to gencode-remote.
Regards,
Mark
Re IPC, this would work, no?

And then in __file/gencode-remote (pseudo-code):

Otherwise, could we maybe (ab|re)use __cdist_object_marker to have a name that we know both in gencode-local and in gencode-remote?

e.g. __file/gencode-local:

e.g. __file/gencode-remote (pseudo-code):
Just thinking out loud.
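A rough sketch of the marker-based variant in terms of the generated code; names, owner and mode are illustrative, and $__remote_copy stands for the copy command that gencode-local expands at generation time:

```sh
# code-local (emitted by gencode-local), one connection to upload to a name
# both sides can derive from the destination and the marker:
$__remote_copy "$source" "${__target_host}:${destination}.${__cdist_object_marker}"

# code-remote (emitted by gencode-remote), one ssh session on the target:
chown root:root "${destination}.${__cdist_object_marker}"
chmod 0755 "${destination}.${__cdist_object_marker}"
mv -f "${destination}.${__cdist_object_marker}" "${destination}"
```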
Hi Steven,
Tried the changes with __cdist_object_marker, and that seems to work fine! I guess it is a bit less secure than running mktemp on the destination, as this way you assume there is no (important) file on the remote which has the same name as the destination file with the generated object marker, but I would find that a very reasonable assumption.
BTW, I did the mv first in gencode-remote, before the chown, to prevent issues with changing only ownership/modes for an already existing file. I don't think that should be an issue, as all generated code from gencode-remote is run in a single ssh session, correct?
Regards,
Mark
Just noticed that this cannot work, because gencode-remote is executed before code-local.
We could of course check ourselves whether the assumed upload destination already exists and error out before uploading. Then I'd say we're about as safe (or unsafe) as we were before.
Hi Steven,
Good point. Tested this. Initially I tried to do it by adding an explorer to the type. I think that would be the nicest place to check, but it seems that __cdist_object_marker is not (yet) available during the explorer stage.
So I reintroduced it through a $__remote_exec in gencode-local, which seems to work fine, but is a bit "unclean" I think.
Regards,
Mark
Why unclean? Something like this would be pretty ok I think:
An explorer is the wrong tool here. I think the time between running the explorer and the file upload is too long.
Oops, forgot something:
Should probably be something like this instead:
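A sketch of the check-and-reserve idea (as generated code-local, i.e. run on the cdist host and talking to the target through $__remote_exec); the variable names mirror the diff below, the error message is illustrative:

```sh
destination_upload="${destination}.${__cdist_object_marker}"

# refuse to continue if something already sits at the upload name ...
$__remote_exec $__target_host "test -e \"$destination_upload\"" && {
    echo "upload destination already exists: $destination_upload" >&2
    exit 1
} || {
    # ... otherwise put a towel in place to reserve the name
    $__remote_exec $__target_host "umask 077; touch \"$destination_upload\""
}

# then upload over the reserved name
$__remote_copy "$source" "${__target_host}:${destination_upload}"
```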
Hi Steven,
Because gencode-local now generates code that runs commands on the target, which should be done by gencode-remote.
The solution works fine, have not seen the previous lockout behaviour :)
Regards,
Mark
True. But in this case there's just no other way.
The only alternative would be to execute code in gencode-* instead of code-* which would be even worse.
Cool!
Can you wrap it up and either re-use (force push to) this PR or create a new one so we can drive this one home?
Thanks!
Hi Steven,
Completely agree, that would be ugly :)
I implemented the diffs you proposed and did some retesting with my test manifest. All my tests execute OK (except the one with a space in the filename, but that one also failed on the original __file implementation; I have to look into that further).
I've committed the changes to git. I see that gitea updated the files in this PR; is that sufficient, or do I need to make some additional changes (not too familiar with gitea and updating a PR ;) )?
Regards,
Mark
Thanks Mark!
Can you please rebase on master (unless it did not change) and squash those commits so we have a cleaner history?
Then I'm all +1.
Cheers,
Steven
ea4a2aac2d to 30193bdae0
@@ -95,0 +96,4 @@
exit 1
} || {
# Put a towel in place.
$__remote_exec $__target_host "umask 077; touch \"$destination_upload\""
Just wondering, are we adding a behaviour change here? I.e. before we unconditionally deleted the file/directory/socket/whatever. Now we fail if it exists?
NVM, this is just the temporary location. This still has a race condition, addressing this in a bigger comment.
Before we used mktemp which would have also failed if it could not have created a file, not?
Of course the chance of mktemp failing is like zero, with at least 3 X's.
mktemp works differently. What we are doing now is similar to mktemp -u.

Again, whether this is an actual problem is a different question.
The typical issue mktemp is trying to solve:
What we do is:
The question we need to answer is:
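To make the distinction concrete, a minimal illustration (not cdist code):

```sh
# mktemp atomically creates the file and hands back a name nobody else can grab:
tmp="$(mktemp /some/dir/foo.XXXXXXXXXX)"      # the file already exists, mode 0600, race-free

# mktemp -u only prints an unused name; between this call and the first write,
# somebody else can create (or symlink) that path:
tmp="$(mktemp -u /some/dir/foo.XXXXXXXXXX)"
# ... window for a race here ...
touch "$tmp"
```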
Guys,
I think we are having a security problem here.
Let's say we have __file /some/dir/foo, a user has write access to /some/dir, and the user can run ps on the target system.

Thus the user (=attacker) knows our suffix by means of ps. She can monitor when we test for it (wait...) and afterwards create /some/dir/foo.oursuffix/ and thus make us copy into a directory.
Then until we do the mv, the directory can be replaced by a file of choice by the attacker.
Now, the question is whether this is a real problem or not, because it assumes that an attacker has write access to the directory we want to place a file in. If that is already the case, she can probably also delete our newly created file anyway.
So maybe this is actually a non-problem, but would like to shortly hear your thoughts on it, too.
I don't think this problem is solvable given the current implementation of cdist.
Maybe using __cdist_object_marker makes this worse, as it's already visible in ps on the target long before it's first used by __file. So an attacker has more time.
Instead of (re|ab)using __cdist_object_marker we could (see the sketch after this list):
Generate a random file name in gencode-local (NOT code-local).
Store it in __object/files/file-name or something.
Grab that name in gencode-remote (NOT code-remote).
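A sketch of that alternative; names, owner and mode are illustrative, and $__remote_copy is expanded when gencode-local emits the code:

```sh
# __file/gencode-local - runs on the cdist host, so the random name never
# shows up in a process list on the target before the upload happens:
destination_upload="${destination}.$(LC_ALL=C tr -dc 'A-Za-z0-9' </dev/urandom | head -c 32)"
printf '%s\n' "$destination_upload" > "$__object/files/destination-upload"
echo "$__remote_copy '$source' '${__target_host}:${destination_upload}'"

# __file/gencode-remote - also runs on the cdist host, after gencode-local,
# so it can read the name back and bake it into code-remote:
destination_upload="$(cat "$__object/files/destination-upload")"
cat << DONE
chown root:root '$destination_upload'
chmod 0755 '$destination_upload'
mv -f '$destination_upload' '$destination'
DONE
```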
But this soon ends up in bikeshedding, as the only way to solve this for good would be to ditch ssh/shell and use some rpc or the like that does not leak this information in the env or in the process tree.
Hi Nico,
Depending on the target (its security profile) and the type of file, an option would be to just overwrite the existing file. This would mitigate this issue, but it would not be a global solution. The behaviour could easily be made selectable by an extra boolean option to the __file type.
Regards,
Mark
Are there any changes you want me to make or test for this pull request? Assuming it is acceptable security wise?
Regards,
Mark
I think if we do it like this, we are as safe as we can be.
If I'm not mistaken an attacker has no way/time to predict or react.
@nico RFC, let's get this PR in asap.
LGTM - please go ahead. I'd suggest we make a major release out of this one, as changes on the file type affect basically everyone. So if we screwed something up, we have at least an indicator for everyone.
Also, 7.0.0 is a very nice release name.
After reading through the comments, I started skimming in the middle because of the heavy discussion happening there - it's really hard to follow.
Reading the first part, I would suggest using a predictable filename (so it won't leave orphaned files on aborts, like it's done in __download) and doing the move in code-remote. Maybe the move happens after permissions are correctly set on the tempfile.

As mentioned, it's hard to follow the discussion of which higher science/magic blocked simple solutions, and Steven's latest patch looks very clunky for the cdist approach.

Targeting the security considerations: we create a root file and override a root or user file. The attacker must be root (where we have already lost the war) or the file owner/group, which doesn't lead to privilege escalation (at minimum when we change permissions before the move). He can just do what he already can do. And if you are worried about moving files onto a directory destination, please use mv -T. If moving files with cdist is a problem, we have a big one for more cdist types that should be handled separately.

In the end, I just want to call for a simple and cdist-elegant solution, which is what Nico and Steven already used to block other MRs. Please elaborate the current state/problem and why a simple and cdist-like solution is (maybe) impossible. Or maybe go back to the start and try a simpler solution.
@matze tl;dr:
The problem that @mark is having is that he is defining a __file with cdist which is used by his ssh config.

The current implementation of cdist's __file type needs 2 ssh connections to deploy a file: one to upload it and move it into place, and a second one to set its attributes.
The second ssh connection fails for him, because the first one uploaded a new file but did not set the attributes. e.g. the file is there but not executable.
The proposed patch fixes that by making all code that has effects on the original file happen in a single ssh connection.
This is the cleanest/best solution I can think of.
The one thing I'm not 100% sure of is: do we have to delete the existing file before moving, or does mv -f handle any possible case for us?

I've got those points, too. It's not hard to understand the real problem, but the big discussion in the middle of this PR is. About security and predictable filenames, that it's possible to modify things in code-local and code-remote, ... (it was the point where I started to skip through).

Since all times (at least I think), __file has used a random tempfile. Lately, I wrote a patch for __download in https://github.com/cdist-community/cdist-conf/pull/28 to change the tempfile behaviour. I discovered that it uses a single predictable tempfile and adapted my change to do the same. What I found out is that you can get a lot of orphaned tempfiles if the type fails. Separating changes into code-local and code-remote, this error point will grow, because it will be more likely that a parallel execution can abort the __file processes, too (if I understand the cdist parallelism correctly).

Because of the following reasons ..
.. I propose to make a deterministic, predictable tempfile like ${__object_id}__file_tmp, which gets rid of the ugly $__remote_exec in gencode-local and of dry-run tempfiles (mktemp -u) shared to gencode-remote, and instead provides a pretty and simple solution.

The move of the tempfile to the real destination should happen after all file attribute changes have happened, so it is non-interruptive to any application that relies on the file. Whether the file is in the end moved by rm -rf $dst; mv $tmp $dst or with a single mv command doesn't really matter.

In the great discussion of this MR, there might have been a security consideration against such a predictable filename, which should be checked to see whether it is a real problem for this suggestion.
Sorry for the late response.
I think any security concern is about whether the temporary filename that is going to be used already exists on the target (like Nico's example of creating a directory).

The current approach is to use mktemp to generate a safe filename and reserve it on the filesystem, which negates this problem, but makes it impossible to pass the filename from the code generated by gencode-local to the code from gencode-remote. The other approach is to not check the target system and append something to the filename in order to make it unique. The latter is a game of chance.

So I think it comes down to how much randomness we add to the filename in order to make it unique enough to withstand an attack. With (I think) at least 255 characters possible in a filename, it should be possible to generate something quite unique. Appending a 50-character suffix (from a-zA-Z0-9) should already give about 300 bits of uniqueness, which is pretty overkill IMHO.

As long as the uniqueness is not generated (and not visible until file creation) on the target, I think that should be pretty ok.

And that is (as far as I can see) basically what Steven's last patch did, using mktemp on the server side in order to generate the file. From a security perspective it might be better not to check the upload destination, as that could give away the filename; just upload it through the code generated by gencode-local. As long as the filename used in gencode-local can be passed to gencode-remote, so that it can change the attributes and move the file in place, it should be ok.

Not checking whether the destination with the very random name already exists would (for me) be a very acceptable risk. The chances of that happening are pretty remote.
Regards,
Mark
@matze
now cdist comes and uploads the changed my-file to your predictive filename
I'll use this for a while to see if there are any side effects.
bdc8e64df0
thanks Mark, currently thinking about how this can be a problem
also, thanks Steven, I saw your comment while writing mine (and already found this, too)
TL;DR
I've written some thoughts about the security problem that wasn't really described at the time I started writing this. At the end, I've added a - in my mind - prettier solution, which might be used cdist-globally.
Important sections you shouldn't miss:
But it's now getting late and I should go to bed (didn't expect it would need that long) - good night :-)
possible security problems
Remembering the security findings in please, the most that can be exploited is still only interference, not actively exploiting something. If you can predict the filename of the uploaded file, you can create a symlink to almost any location on the filesystem. As root writes this file, it will work almost every time. Also, the file will be in mode 0600, so no other user can read it. Congratulations, you did a DOS by overwriting a maybe-important file with some crap (as you can't control the user-defined content). Only in edge cases can you really use the file redirection to do some evil. For it to gain broad usage, it must be possible with a public core type.

As Nico and Mark said, with prior knowledge of the filename, you can create a folder with the exact same name. scp or sftp will copy the file inside the folder in all cases. In an atomic call (doing nothing else than a single remote copy), you can't influence anything - it will copy the file inside the folder every time. After this, permission changes are applied to the folder, but not to the file, meaning the file stays owned by root with mode 0600. The attacker can't write or even read the file. The directory will be moved to the destination. A program which wants to read this file fails. DOS. Nothing more.

The last option works if the attacker has permission for the folder. Then he can move or even delete the temp file or even the original file. Linux permissions, nothing special.
security conclusion
That said, only the symlink thing can theoretically be used for exploits, if you know the content that will be copied over and if it can even be used for that (the file is still 0600, so neither readable nor writable). Otherwise you can only interrupt the file process so it won't be able to automatically restore. The DOS may even only take effect after a service reload/restart.

To perform all the DOSes, the attacker needs a user which can write to the folder. Then they can probably write to or move the file they need by themselves anyway. Only the exploit with symlinks explicitly requires cdist to work, which is the only convincing argument I've found. Protecting against the other problems isn't worth it, as it makes no difference whether cdist is involved.
how to avoid
How to avoid this exploit? The current live __file implementation uses both:

Noting that there are doubts in this MR that 1. isn't really safe, because of the "long" time between when the tempfile is created and when it is copied over.

An easy way of avoiding the above problem is to use a tool that is aware of it. This is nice because we will never get to the level of an atomic filesystem action like mkdir has (and which is often used for locks), but it will get better. Instead of relying on the shell and kernel to fork the two commands directly after one another, it's only one fork. Not perfect, but theoretically less effort/time in between these checks. I've written too much, because I have now tested that cp or mv will never write to the symlink destination, but rather replace it. Still, think about mv -T, which won't overwrite folders ...

Symlink problems happen with scp; both with the old SCP and the new sftp protocol. This means the predictable filenames I previously suggested mean death. At least when you associate DOS = death. No, seriously, I now understand why they shouldn't be used. This also means @ander needs to refactor his __download and co.

But now, coming back to utilising the shortest time for changes: I've noticed that if we use sftp instead of scp in command line mode, we can batch things in a single command. I'm pretty sure we can script this by passing the commands to stdin. Then we can script an rm pretty close to a put, utilise the mv, or something similar. Even if things need to go over the network, it is better than simply doing a $__remote_exec and then a $__remote_copy or something similar ...

other funny things we can do (but better not)
Which should work, too, is to use piping from ssh. Like this:

It will work, and at least I think ssh piping should be POSIX-compatible, but it might be a hard abuse of it ...

The other things are HERE-documents, like this:

I don't propose this seriously, but it came to my mind, too. If special characters are making trouble in the HERE-document, we can use base64 or some ancient email transcoding which served the same use case (to embed binary content into ascii emails, which usually increased size by 20% :P). I don't think we can solve it with POSIX compatibility.
solution complying with cdist principles
After all this wall of text in this MR disscussion, I will finally present the way I've found that should comply (after all the info I've got by writing this comment): Copy the file with an hardly predictive file name so nobody whats to trap us with a symlink. Then move the file to somewhat predictive. As
mv
overrides symlinks instead of writing to, this will be save. Then, we can chill and wait till an other script (code-remote
in this case) picks up the file with no real effort guessing the name. And a move in the same directory doesn't really cost anything.Example (you know how I mean the pseudo script):
alternative cdist-wide solution
The above solution will work based on unpredictability, just like Mark suggested and as was already in discussion. But it will require changing every other type utilising $__remote_copy. Ideally they should use the __file type, but others are around, too, like __download and more. By writing the sftp section, I thought about such a general script replacing $__remote_copy.

This means a script utilising sftp with a fallback to scp. If scp is required, we can do the same as with sftp, which doesn't really drop the security level.

The only problem is that the logic is very general - but we make life difficult when somebody wants to use their own $__remote_copy with the same security. Maybe stacking of different $__remote_copy's - Ander would like to combine it with the sudo $__remote_copy.
.@matze, I think there is one important point to highlight: the attack for predictable names only applies to directories that are user writable. Typically mktemp and friends have to deal with /tmp.
cdist is not affected by this in probably most, if not virtually all cases, for multiple reasons:
My main motivation was to raise awareness, but there is not much practical need to change behaviour for writing files/directories/symlinks to directories that are not user writable anyway.
Given that scp is deprecated anyway I'm all open to look at sftp at some point. But not now.
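For whenever that happens, batching the upload, the attribute change and the final rename over a single sftp connection could look roughly like this (illustrative; option support and the behaviour of rename vary between OpenSSH versions):

```sh
# feed a small batch of sftp commands via stdin: upload to a temp name,
# fix the mode, then rename into place, all over one connection
sftp -b - "$__target_host" << DONE
put $source $destination_upload
chmod 755 $destination_upload
rename $destination_upload $destination
DONE
```

Note that sftp's rename may refuse to overwrite an existing destination unless the server supports the posix-rename extension.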
This PR/issue was to discuss and fix the following problem:
The current implementation of cdist's __file type needs 2 ssh connections to deploy a file: one to upload it and move it into place, and a second one to set its attributes.
There are cases where using 2 ssh connections causes problems.
We've implemented a fix that addresses this by making all changes in a more
atomic way within a single ssh connection.
Pull request closed