source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
688,253 | I have an installation of Windows 10 and Pop on separate partitions of the same drive and I want to dual boot them with systemd-boot, which is the default for Pop OS. I followed this guide (the TL;DR version is good enough) because I didn't have Windows in the boot menu selection. The guide just tells you to copy the EFI files from the Windows EFI partition into the Pop OS EFI partition so systemd-boot can recognize Windows. This works fine and both Windows and Pop appear in the boot menu. When I boot Pop there is no issue. However, when I boot Windows everything works fine for the first time, but then after a reboot cycle all Pop OS partitions disappear from the boot menu and instead the computer boots into the GRUB terminal (?? GRUB wasn't even being used before). The Pop partition is no longer recognized as bootable and I can't boot into Pop. This problem is reproducible. It happens every time I do the above steps. Any help is appreciated. | Since you're using ext4 you could format the filesystem and then set the UUID to a known value afterwards. man tune2fs writes, -U UUID Set the universally unique identifier (UUID) of the filesystem to UUID. The format of the UUID is a series of hex digits separated by hyphens, like this c1b9d5a2-f162-11cf-9ece-0020afc76f16 . And similarly, man mkfs.ext4 writes, -U UUID Set the universally unique identifier (UUID) of the filesystem to UUID. […as above…] Personally, I prefer to reference filesystems by label. For example in the /etc/fstab for one of my systems I have entries like this: # <file system> <mount point> <type> <options> <dump> <pass>
LABEL=root / ext4 errors=remount-ro 0 1
LABEL=backup /backup ext4 defaults 0 2 Such labels can be added with the -L flag for tune2fs and mkfs.ext4 . They avoid issues with inode checksums causing rediscovery or corruption on a reformatted filesystem and they are considerably easier to identify visually. (But highly unlikely to be unique across multiple systems, so beware if swapping disks around.) | {
"source": [
"https://unix.stackexchange.com/questions/688253",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/512000/"
]
} |
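A minimal sketch of the commands the answer above quotes from the man pages; /dev/sdXN is a placeholder device name (not taken from the question), so double-check it before running anything destructive.
mkfs.ext4 -L backup -U c1b9d5a2-f162-11cf-9ece-0020afc76f16 /dev/sdXN   # label and known UUID at format time
tune2fs -L backup /dev/sdXN                                             # or relabel an existing ext4 filesystem
tune2fs -U c1b9d5a2-f162-11cf-9ece-0020afc76f16 /dev/sdXN               # or reset the UUID afterwards
# /etc/fstab can then refer to the label instead of the device or UUID:
# LABEL=backup  /backup  ext4  defaults  0  2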
688,255 | I'm asking before trying because I already have a few things set up in Wine Stable, so I don't want to mess things up by installing something else over it. Basically, I want to install Staging because I have an app which is said to require it to function properly under Linux (it's a music player). Will installing Staging affect the way Wine Stable behaves? If so, how? Can I configure Wine Stable and Wine Staging separately? I'm running Debian Bullseye Stable. Thank you. | Since you're using ext4 you could format the filesystem and then set the UUID to a known value afterwards. man tune2fs writes, -U UUID Set the universally unique identifier (UUID) of the filesystem to UUID. The format of the UUID is a series of hex digits separated by hyphens, like this c1b9d5a2-f162-11cf-9ece-0020afc76f16 . And similarly, man mkfs.ext4 writes, -U UUID Set the universally unique identifier (UUID) of the filesystem to UUID. […as above…] Personally, I prefer to reference filesystems by label. For example in the /etc/fstab for one of my systems I have entries like this: # <file system> <mount point> <type> <options> <dump> <pass>
LABEL=root / ext4 errors=remount-ro 0 1
LABEL=backup /backup ext4 defaults 0 2 Such labels can be added with the -L flag for tune2fs and mkfs.ext4 . They avoid issues with inode checksums causing rediscovery or corruption on a reformatted filesystem and they are considerably easier to identify visually. (But highly unlikely to be unique across multiple systems, so beware if swapping disks around.) | {
"source": [
"https://unix.stackexchange.com/questions/688255",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/501732/"
]
} |
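As a hedged pointer on the Wine question itself (the stored response covers a different topic): Wine keeps its configuration per prefix, so an isolated prefix keeps a Staging setup from touching the existing Stable one. The prefix path and installer path below are only illustrative.
WINEPREFIX="$HOME/.wine-staging" winecfg                                 # creates/configures an independent prefix
WINEPREFIX="$HOME/.wine-staging" wine /path/to/music-player-setup.exe    # run the app inside that prefix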
688,790 | I'm calculating an aspect-ratio height from a given width; in this example I'm using a 4:3 ratio and a width of 800. The result (height) should be 600, but bash is returning 800, and I'm not sure why. I've tried other languages; most seem to have issues too, php seems to be one of the few that work. PHP (returns 600) php -r 'echo 800/(4/3);' Python (returns 800) python -c "print(800/(4/3))" bc -l kinda works (returns 600.00000000000000000150) -l is "Define the standard math library", not too sure what that means, but it seems to get me closer to my goal, but where are the extra 0's and 150 coming from? echo '800 / (4 / 3)' | bc -l I'm guessing it's something to do with floating point handling, or truncating the result of 4/3 . Now I could just use php and call it a day, but that seems kinda overkill for a relatively simple calculation.
Any idea what's going on here? | Bash arithmetic is integer only. So 4/3 returns 1. And 800/1 is 800. If you can control the inputs then you can re-factor and do the multiplication before the division: $ echo $(( 800*3/4 ))
600 Your other examples are also "integer". If, for example, you force python floating point by replacing 4 with 4.0 then you get a different answer (Python 3 doesn't need this): $ python -c "print(800/(4.0/3))"
600.0 bc -l loads the standard math library (with functions like s() for sine, l() for natural logarithm, etc), but more importantly here, sets scale to 20. scale defines how many decimals after the radix to generate in divisions, so 4/3 there will be 1.33333333333333333333 (in effect 133333333333333333333/1e+20 ), and that explains why you get 600.00000000000000000150 . echo 'scale=1000; 800/(4/3)' | bc Will get you more precision (without having to load the math library), but you'll never get just 600 there as 4/3 cannot be represented in decimal. | {
"source": [
"https://unix.stackexchange.com/questions/688790",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/43139/"
]
} |
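Building on the reorder-then-divide trick in the answer above, a small sketch of a pure-integer helper that also rounds to the nearest pixel (the function name is illustrative):
# height = width * den / num for an aspect ratio num:den, using only integer math
aspect_height() { echo $(( ($1 * $3 + $2 / 2) / $2 )); }
aspect_height 800 4 3     # prints 600
aspect_height 1000 16 9   # prints 563 (562.5 rounded)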
688,813 | I have tried using sed to match \ .
I am unable to match \ and replace it with \\\\ .
I want to replace a single \ with 4 backslashes ( \\\\ ). | Bash arithmetic is integer only. So 4/3 returns 1. And 800/1 is 800. If you can control the inputs then you can re-factor and do the multiplication before the division: $ echo $(( 800*3/4 ))
600 Your other examples are also "integer". If, for example, you force python floating point by replacing 4 with 4.0 then you get a different answer (Python 3 doesn't need this): $ python -c "print(800/(4.0/3))"
600.0 bc -l loads the standard math library (with functions like s() for sine, l() for natural logarithm, etc), but more importantly here, sets scale to 20. scale defines how many decimals after the radix to generate in divisions, so 4/3 there will be 1.33333333333333333333 (in effect 133333333333333333333/1e+20 ), and that explains why you get 600.00000000000000000150 . echo 'scale=1000; 800/(4/3)' | bc Will get you more precision (without having to load the math library), but you'll never get just 600 there as 4/3 cannot be represented in decimal. | {
"source": [
"https://unix.stackexchange.com/questions/688813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/512537/"
]
} |
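A hedged sketch of what question 688,813 itself asks for (replacing each single backslash with four), assuming a POSIX sed; the stored response above covers a different topic:
printf 'a\\b\n' > infile        # infile now contains: a\b
sed 's/\\/\\\\\\\\/g' infile    # prints: a\\\\b (each \ replaced by four)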
690,813 | I foolishly started a job that turned out to be so big and busy that it froze everything. I wish I could type a kill command or use xkill, but the system is unresponsive, apart from the audible swapping. On Windows, Ctrl + Alt + Del helps in these situations; does Linux have a way to knock through into an overloaded system? Just saw this one and couldn't stop myself from sharing: | Ctrl + Alt + F4 opens a console window, where you can log in and kill stuff as necessary or reboot the system. Use Ctrl + Alt + F2 or Ctrl + Alt + F1 to go back. In some cases you can restart the gnome session by pressing Alt + F2 , and then R in the window that opens. This should leave all programs running, but gnome itself will restart, so if the issue is in gnome it may help. If the above don't help, you can do a warm reboot by pressing the following key sequence: While keeping pressed down both the Alt and Print Screen keys, sequentially (one by one) press the keys: R E I S U B This will sync and unmount the file system and do a safe reboot. The keys have the following meaning: R: Switch the keyboard from raw mode to XLATE mode E: Send the SIGTERM signal to all processes except init I: Send the SIGKILL signal to all processes except init S: Sync all mounted filesystems U: Remount all mounted filesystems in read-only mode B: Immediately reboot the system, without unmounting partitions or syncing Source Finally, if all else fails, keep the power-on button pressed for a few seconds to force a cold reboot, or take the power cable/battery out ;-). | {
"source": [
"https://unix.stackexchange.com/questions/690813",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/112656/"
]
} |
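One caveat to the Alt + SysRq (Print Screen) sequence described above, sketched here: it only works if the kernel allows it, and the default value of that switch varies by distribution.
cat /proc/sys/kernel/sysrq                  # 0 = disabled, 1 = all functions allowed, other values are a bitmask
sudo sysctl kernel.sysrq=1                  # enable for the running system
echo 'kernel.sysrq = 1' | sudo tee /etc/sysctl.d/90-sysrq.conf   # keep it enabled across reboots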
692,178 | I've copied a directory with cp -as /media/user/dir symlinks and now I'm terrified to rm -rf symlinks as it might delete files in /media/user/dir . What is the safe way to only delete the directory structure and the symbolic links in symlinks without touching anything in /media/user/dir ? As a test, I did this: $ mkdir test
$ touch test/file
$ mkdir test/dir
$ touch test/dir/file2
$ cp -as test syms
$ rm -rf syms This test didn't touch the original test directory. Is this a complete test? Is it always like this? I don't have the space to make a backup of /media/user/dir | You may remove the directory containing the symbolic links without fear that this would also remove the original files. The POSIX specification for the rm utility says (about what happens when encountering a symbolic link): The rm utility shall not traverse directories by following symbolic links into other parts of the hierarchy, but shall remove the links themselves. And then, a bit later (in the Rationale section): The rm utility removes symbolic links themselves, not the files they refer to, as a consequence of the dependence on the unlink() functionality, per the DESCRIPTION. When removing hierarchies with -r or -R , the prohibition on following symbolic links has to be made explicit. The GNU rm manual doesn't say anything about this, but we must assume that it does not break with POSIX in this regard. The manual on other systems sometimes contains this promise explicitly. Here's from OpenBSD (FreeBSD and NetBSD has identical wordings): The rm utility removes symbolic links, not the files referenced by the
links. ... and from AIX (Solaris has a similar wording): If the file is a symbolic link, the link is removed, but the file or directory that the symbolic link refers to remains. Note that the behavior of rm with regards to symbolic links may be tested easily locally: $ touch file
$ ls -l
total 0
-rw-r--r-- 1 myself wheel 0 Feb 26 09:32 file
$ ln -s file link
$ ls -l link
lrwxr-xr-x 1 myself wheel 4 Feb 26 09:32 link -> file
$ rm link
$ ls -l
total 0
-rw-r--r-- 1 myself wheel 0 Feb 26 09:32 file A similar exercise could be carried out for symbolic links in a directory. | {
"source": [
"https://unix.stackexchange.com/questions/692178",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/233403/"
]
} |
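As a gentler alternative to rm -rf on the copied tree, a sketch (assuming GNU find) that removes only the links and then the emptied directories:
find symlinks -type l -delete            # deletes the symbolic links themselves, never their targets
find symlinks -type d -empty -delete     # then removes the directories once they are empty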
693,003 | Say I have two possible paths I want to list directories and files under on a Linux machine: /some/path1/
/some/path2/ If I do the following in tcsh , I get 0 exit code, if at least one of path1 or path2 exists: ls -d /some/{path1,path2}/* But if I do the exact same thing in bash , I get 2 exit code, with a stderr message reporting path1 does not exist (if path1 is the one that doesn't exist). How can I make bash behave like tcsh in this case? Is there a switch to ls that I can ask it to give back 0 if at least one path exists? If neither one exists, I do expect non-zero code, which is what tcsh gives back. | Most of your questions are already answered at Why is nullglob not default? . One thing to bear in mind is that: ls -d /some/{path1,path2}/* In csh / tcsh / zsh / bash / ksh (but not fish , see below) is the same as: ls -d /some/path1/* /some/path2/* As the brace expansion is performed before (not as part of ) the glob expansion, it's the shell that expands those /some/pathx/* patterns into the list of matching files to pass as separate arguments to ls . bash , like ksh which it mostly copied has inherited a misfeature introduced by the Bourne shell in that, when a glob pattern doesn't match any file, it is passed as-is, literally as an argument to the command. So, in those shells, if /some/path1/* matches at least one file and /some/path2/* matches none, ls will be called with -d , /some/path1/file1 , /some/path1/file2 and a literal /some/path2/* as arguments. As the /some/path2/* file doesn't exist, ls will report an error. csh behaves like in the early Unix versions, where globs were not performed by the shell but by a /etc/glob helper utility (which gave their name to globs). That helper would perform the glob expansions before invoking the command and report a No match error without running the command if all the glob patterns failed to match any file. Otherwise, as long as there was at least one glob with matches, all the non-matching ones would simply be removed. So in our example above, with csh / tcsh or the ancient Thompson shell and its /etc/glob helper, ls would be called with -d , /some/path1/file1 and /some/path1/file2 only, and would likely succeed (as long as /some/path1 is searchable). zsh is both a Korn-like and csh-like shell. It does not have that misfeature of the Bourne shell whereby unmatched globs are passed as is¹, but by default is stricter than csh in that, all failing globs are considered as a fatal error. So in zsh , by default, if either /some/path1/* or /some/path2/* (or both) fails to match, the command is aborted. A similar behaviour can be enabled in the bash shell with the failglob option². That makes for more predictable / consistent behaviours but means that you can run into that problem when you want to pass more than one glob expansion to a command and would not like it to fail as long as one of the globs succeeds. You can however set the cshnullglob option to get a csh-like behaviour (or emulate csh ). 
That can be done locally by using an anonymous function: () { set -o localoptions -o cshnullglob; ls -d /some/{path1,path2}/*; } Or just using a subshell: (set -o cshnullglob; ls -d /some/{path1,path2}/*) However here, instead of using two globs, you could use one that matches all of them using the alternation glob operator: ls -d /some/(path1|path2)/* Here, you could even do: ls -d /some/path[12]/* In bash , you can enable the extglob option for bash to support a subset of ksh's extended glob operator, including alternation: (shopt -s extglob; ls -d /some/@(path1|path2)/*) Now, because of that misfeature inherited from the Bourne shell, if that glob doesn't match any file, /some/@(path1|path2)/* would be passed as-is to ls and ls could end up listing a file called literally /some/@(path1|path2)/* , so you'd also want to enable the failglob option to guard against that: (shopt -s extglob failglob; ls -d /some/@(path1|path2)/*) Alternatively, you can use the nullglob option (which bash copied from zsh ) for all non-matching globs to expand to nothing. But: (shopt -s nullglob; ls -d /some/path1/* /some/path2/*) Would be wrong in the special case of the ls command, which, if not passed any argument lists . . You could however use nullglob to store the glob expansion into an array, and only call ls with the member of the arrays as argument if it is non-empty: (
shopt -s nullglob
files=( /some/path1/* /some/path2/* )
if (( ${#files[@]} > 0 )); then
ls -d -- "${files[@]}"
else
echo >&2 No match
exit 2
fi
) In zsh , instead of enabling nullglob globally, you can enable it on a per-glob basis with the (N) glob qualifier (which inspired ksh's ~(N) , not copied by bash yet), and use an anonymous function again instead of an array: () {
(( $# > 0 )) && ls -d -- "$@"
} /some/path1/*(N) /some/path2/*(N) The fish shell now behaves similarly to zsh where failing globs cause an error, except when the glob is used with for , set (which is used to assign arrays) or count where it behaves in a nullglob fashion instead. Also, in fish , the brace expansion though not a glob operator in itself is done as part of globbing, or at least a command is not aborted when brace expansion is combined with globbing and at least one element can be returned. So, in fish : ls -d /some/{path1,path2}/* Would end up in effect behaving like in csh . Even: {ls,-d,/xx*} Would result in ls being called with -d alone if /xx* was not matched instead of failing (behaving differently from csh in this instance). In any case, if it's just to print the matching file paths, you don't need ls . In zsh , you could use its print builtin to print in columns: print -rC3 /some/{path1,path2}/*(N) Would print the paths r aw on 3 columns (and print nothing if there's no match with the N ullglob glob qualifier). If instead you want to check if there's at least one non-hidden file in any of those two directories, you can do: # bash
if (shopt -s nullglob; set -- /some/path1/* /some/path2/*; (($#))); then
echo yes
else
echo no
fi Or using a function: has_non_hidden_files() (
shopt -s nullglob
set -- "$1"/*
(($#))
)
if has_non_hidden_files /some/path1 || has_non_hidden_files /some/path2
then
echo yes
else
echo no
fi # zsh
if ()(($#)) /some/path1/*(N) /some/path2/*(N); then
echo yes
else
echo no
fi Or with a function: has_non_hidden_files() ()(($#)) $1/*(NY1)
if has_non_hidden_files /some/path1 || has_non_hidden_files /some/path2
then
echo yes
else
echo no
fi ( Y1 as an optimisation to stop after finding the first file) Beware those has_non_hidden_files would (silently) return false for directories that are not readable by the user (whether they have files or not). In zsh , you could detect this kind of situation with its $ERRNO special variable. ¹ The Bourne behaviour (which was specified by POSIX) can be enabled though in zsh by doing emulate sh or emulate ksh or with set +o nomatch ² beware there are significant differences in behaviour as to what exactly is cancelled when the glob doesn't match, the fish behaviour being generally the more sensible, and the bash -O failglob probably the worst | {
"source": [
"https://unix.stackexchange.com/questions/693003",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/227020/"
]
} |
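For the narrower test of whether at least one of the two globs matches anything, a bash-only sketch using compgen (which prints a glob's matches and returns non-zero when there are none) may also serve:
if compgen -G '/some/path1/*' > /dev/null || compgen -G '/some/path2/*' > /dev/null; then
echo yes
else
echo no
fi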
693,025 | I'm currently trying to install Splint in an openSUSE Leap 15.2 distro locally (without sudo privileges). I tried to follow the instructions here : In my home directory: git clone https://github.com/splintchecker/splint I entered the splint directory after that. The next instruction was to run configure . But there was no such file present. Following the suggestion here , I ran: autoreconf -i And then: ./configure
make At this point, the build appeared to be a success. So I tried running splint and got a command-not-found message. The answer here seemed to suggest running make install , so I tried that next, but to no avail. Are there some other steps I should take? Did I mess up somewhere? Edit: Here is the output from make . The output was too long so I have truncated it. (CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/sh /home/styx/splint/config-aux/missing autoheader)
rm -f stamp-h1
touch config.h.in
cd . && /bin/sh ./config.status config.h
config.status: creating config.h
config.status: config.h is unchanged
make all-recursive
make[1]: Entering directory '/home/styx/splint'
Making all in src
make[2]: Entering directory '/home/styx/splint/src'
bison -v -t -d --debug --no-lines -p lsl signature.y
signature.tab.h generated
cat bison.head signature.tab.h bison.reset >signature_gen.h
bison -v -t -d --debug --no-lines cgrammar.y
cgrammar.y: warning: 159 shift/reduce conflicts [-Wconflicts-sr]
cgrammar.y: warning: 123 reduce/reduce conflicts [-Wconflicts-rr]
* Note: Expect 159 shift/reduce conflicts and 123 reduce/reduce conflicts. (see cgrammar.y for explanation)
cgrammar.tab.h generated
cat bison.head cgrammar.tab.h bison.reset | /usr/bin/sed 's/YYSTYPE/cgrammar_YYSTYPE/g' | /usr/bin/sed 's/lsllex/cgrammar_lsllex/g' >cgrammar_tokens.h
bison -v -t -d --debug --no-lines -p yl llgrammar.y
llgrammar.y: warning: 2 shift/reduce conflicts [-Wconflicts-sr]
* Note: Expect 2 shift/reduce conflicts
llgrammar.tab.h generated
cat bison.head llgrammar.tab.h bison.reset >llgrammar_gen.h
bison -v -t -d --debug --no-lines -p mt mtgrammar.y
mtgrammar.y: warning: 11 shift/reduce conflicts [-Wconflicts-sr]
* Note: Expect 11 shift/reduce conflicts.
mtgrammar.tab.h generated
cat bison.head mtgrammar.tab.h bison.reset >mtgrammar_tokens.h
flex -L -o cscanner.lex.c cscanner.l
cat flex.head cscanner.lex.c flex.reset | /usr/bin/sed 's/YYSTYPE/cgrammar_YYSTYPE/g' | /usr/bin/sed 's/lsllex/cgrammar_lsllex/g' >cscanner.c
cat bison.head cgrammar.tab.c bison.reset | /usr/bin/sed 's/YYSTYPE/cgrammar_YYSTYPE/g' | /usr/bin/sed 's/lsllex/cgrammar_lsllex/g' >cgrammar.c
cat bison.head mtgrammar.tab.c bison.reset >mtgrammar.c
cat bison.head llgrammar.tab.c bison.reset >llgrammar.c
cat bison.head signature.tab.c bison.reset >signature.c
/usr/bin/grep "FLG_" flags.def >flag_codes.gen
make all-am
make[3]: Entering directory '/home/styx/splint/src'
gcc -DHAVE_CONFIG_H -I. -I.. -I./Headers -I. -g -O2 -MT cscanner.o -MD -MP -MF .deps/cscanner.Tpo -c -o cscanner.o cscanner.c
mv -f .deps/cscanner.Tpo .deps/cscanner.Po
...
...
...
gcc -DHAVE_CONFIG_H -I. -I.. -I./Headers -I. -g -O2 -MT lsymbol.o -MD -MP -MF .deps/lsymbol.Tpo -c -o lsymbol.o lsymbol.c
mv -f .deps/lsymbol.Tpo .deps/lsymbol.Po
gcc -DHAVE_CONFIG_H -I. -I.. -I./Headers -I. -g -O2 -MT mapping.o -MD -MP -MF .deps/mapping.Tpo -c -o mapping.o mapping.c
mv -f .deps/mapping.Tpo .deps/mapping.Po
gcc -g -O2 -o splint cscanner.o cgrammar.o mtgrammar.o llgrammar.o signature.o cppmain.o cpplib.o cppexp.o cpphash.o cpperror.o context.o uentry.o cprim.o macrocache.o qual.o qtype.o stateClause.o stateClauseList.o ctype.o cvar.o clabstract.o idDecl.o clause.o globalsClause.o modifiesClause.o warnClause.o functionClause.o functionClauseList.o metaStateConstraint.o metaStateConstraintList.o metaStateExpression.o metaStateSpecifier.o functionConstraint.o pointers.o cscannerHelp.o structNames.o transferChecks.o varKinds.o nameChecks.o exprData.o cstring.o fileloc.o message.o inputStream.o fileTable.o cstringTable.o valueTable.o stateValue.o llerror.o messageLog.o flagMarker.o aliasTable.o ynm.o sRefTable.o genericTable.o ekind.o usymtab.o multiVal.o lltok.o sRef.o lcllib.o randomNumbers.o fileLib.o globals.o flags.o general.o osd.o reader.o mtreader.o clauseStack.o filelocStack.o cstringList.o cstringSList.o sRefSetList.o ctypeList.o enumNameList.o enumNameSList.o exprNodeList.o exprNodeSList.o uentryList.o fileIdList.o filelocList.o qualList.o sRefList.o flagMarkerList.o idDeclList.o flagSpec.o globSet.o intSet.o typeIdSet.o guardSet.o usymIdSet.o sRefSet.o mtscanner.o stateInfo.o stateCombinationTable.o metaStateTable.o metaStateInfo.o annotationTable.o annotationInfo.o mttok.o mtDeclarationNode.o mtDeclarationPieces.o mtDeclarationPiece.o mtContextNode.o mtValuesNode.o mtDefaultsNode.o mtAnnotationsNode.o mtMergeNode.o mtAnnotationList.o mtAnnotationDecl.o mtTransferClauseList.o mtTransferClause.o mtTransferAction.o mtLoseReferenceList.o mtLoseReference.o mtDefaultsDeclList.o mtDefaultsDecl.o mtMergeItem.o mtMergeClause.o mtMergeClauseList.o exprNode.o exprChecks.o llmain.o help.o rcfiles.o constraintList.o constraintResolve.o constraintGeneration.o constraintTerm.o constraintExprData.o constraintExpr.o constraint.o loopHeuristics.o lsymbolSet.o sigNodeSet.o lslOpSet.o sortSet.o initDeclNodeList.o sortList.o declaratorInvNodeList.o interfaceNodeList.o sortSetList.o declaratorNodeList.o letDeclNodeList.o stDeclNodeList.o storeRefNodeList.o lslOpList.o lsymbolList.o termNodeList.o ltokenList.o traitRefNodeList.o pairNodeList.o typeNameNodeList.o fcnNodeList.o paramNodeList.o programNodeList.o varDeclarationNodeList.o varNodeList.o quantifierNodeList.o replaceNodeList.o importNodeList.o tokentable.o scan.o scanline.o lslparse.o lh.o checking.o lclctypes.o imports.o lslinit.o syntable.o usymtab_interface.o abstract.o ltoken.o lclscanline.o lclsyntable.o lcltokentable.o sort.o symtable.o lclinit.o shift.o lclscan.o lsymbol.o mapping.o -lfl
make[3]: Leaving directory '/home/styx/splint/src'
make[2]: Leaving directory '/home/styx/splint/src'
Making all in lib
make[2]: Entering directory '/home/styx/splint/lib'
../src/splint -nof -nolib +impconj standard.h -dump standard
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no warnings
../src/splint -nof -nolib +impconj -DSTRICT standard.h -dump standardstrict
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no warnings
../src/splint -nof -nolib +impconj standard.h posix.h -dump posix
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no warnings
../src/splint -nof -nolib +impconj -DSTRICT standard.h posix.h -dump posixstrict
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no warnings
../src/splint -supcounts -nof -incondefs -nolib +impconj standard.h posix.h unix.h stdio.h stdlib.h -dump unix
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no warnings
../src/splint -supcounts -nof -incondefs -nolib +impconj -DSTRICT standard.h posix.h unix.h stdio.h stdlib.h -dump unixstrict
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no warnings
make[2]: Leaving directory '/home/styx/splint/lib'
Making all in imports
make[2]: Entering directory '/home/styx/splint/imports'
LARCH_PATH="../lib:../lib" ../src/splint stdlib.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint assert.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint ctype.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint errno.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint limits.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint locale.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint math.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint setjmp.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint signal.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint stdarg.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint stdio.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint string.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint strings.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
LARCH_PATH="../lib:../lib" ../src/splint time.lcl
Splint 3.1.2 --- 05 Mar 2022
Finished checking --- no code processed
make[2]: Leaving directory '/home/styx/splint/imports'
Making all in doc
make[2]: Entering directory '/home/styx/splint/doc'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/home/styx/splint/doc'
Making all in test
make[2]: Entering directory '/home/styx/splint/test'
Use make check to run the test suite
make[2]: Leaving directory '/home/styx/splint/test'
make[2]: Entering directory '/home/styx/splint'
make[2]: Leaving directory '/home/styx/splint'
make[1]: Leaving directory '/home/styx/splint' Here is the output from make install : Making install in src
make[1]: Entering directory '/home/styx/splint/src'
make install-am
make[2]: Entering directory '/home/styx/splint/src'
make[3]: Entering directory '/home/styx/splint/src'
/usr/bin/mkdir -p '/usr/local/bin'
/usr/bin/install -c splint '/usr/local/bin'
/usr/bin/install: cannot create regular file '/usr/local/bin/splint': Permission denied
make[3]: *** [Makefile:628: install-binPROGRAMS] Error 1
make[3]: Leaving directory '/home/styx/splint/src'
make[2]: *** [Makefile:976: install-am] Error 2
make[2]: Leaving directory '/home/styx/splint/src'
make[1]: *** [Makefile:970: install] Error 2
make[1]: Leaving directory '/home/styx/splint/src'
make: *** [Makefile:374: install-recursive] Error 1 | Most of your questions are already answered at Why is nullglob not default? . One thing to bear in mind is that: ls -d /some/{path1,path2}/* In csh / tcsh / zsh / bash / ksh (but not fish , see below) is the same as: ls -d /some/path1/* /some/path2/* As the brace expansion is performed before (not as part of ) the glob expansion, it's the shell that expands those /some/pathx/* patterns into the list of matching files to pass as separate arguments to ls . bash , like ksh which it mostly copied has inherited a misfeature introduced by the Bourne shell in that, when a glob pattern doesn't match any file, it is passed as-is, literally as an argument to the command. So, in those shells, if /some/path1/* matches at least one file and /some/path2/* matches none, ls will be called with -d , /some/path1/file1 , /some/path1/file2 and a literal /some/path2/* as arguments. As the /some/path2/* file doesn't exist, ls will report an error. csh behaves like in the early Unix versions, where globs were not performed by the shell but by a /etc/glob helper utility (which gave their name to globs). That helper would perform the glob expansions before invoking the command and report a No match error without running the command if all the glob patterns failed to match any file. Otherwise, as long as there was at least one glob with matches, all the non-matching ones would simply be removed. So in our example above, with csh / tcsh or the ancient Thompson shell and its /etc/glob helper, ls would be called with -d , /some/path1/file1 and /some/path1/file2 only, and would likely succeed (as long as /some/path1 is searchable). zsh is both a Korn-like and csh-like shell. It does not have that misfeature of the Bourne shell whereby unmatched globs are passed as is¹, but by default is stricter than csh in that, all failing globs are considered as a fatal error. So in zsh , by default, if either /some/path1/* or /some/path2/* (or both) fails to match, the command is aborted. A similar behaviour can be enabled in the bash shell with the failglob option². That makes for more predictable / consistent behaviours but means that you can run into that problem when you want to pass more than one glob expansion to a command and would not like it to fail as long as one of the globs succeeds. You can however set the cshnullglob option to get a csh-like behaviour (or emulate csh ). That can be done locally by using an anonymous function: () { set -o localoptions -o cshnullglob; ls -d /some/{path1,path2}/*; } Or just using a subshell: (set -o cshnullglob; ls -d /some/{path1,path2}/*) However here, instead of using two globs, you could use one that matches all of them using the alternation glob operator: ls -d /some/(path1|path2)/* Here, you could even do: ls -d /some/path[12]/* In bash , you can enable the extglob option for bash to support a subset of ksh's extended glob operator, including alternation: (shopt -s extglob; ls -d /some/@(path1|path2)/*) Now, because of that misfeature inherited from the Bourne shell, if that glob doesn't match any file, /some/@(path1|path2)/* would be passed as-is to ls and ls could end up listing a file called literally /some/@(path1|path2)/* , so you'd also want to enable the failglob option to guard against that: (shopt -s extglob failglob; ls -d /some/@(path1|path2)/*) Alternatively, you can use the nullglob option (which bash copied from zsh ) for all non-matching globs to expand to nothing. 
But: (shopt -s nullglob; ls -d /some/path1/* /some/path2/*) Would be wrong in the special case of the ls command, which, if not passed any argument lists . . You could however use nullglob to store the glob expansion into an array, and only call ls with the member of the arrays as argument if it is non-empty: (
shopt -s nullglob
files=( /some/path1/* /some/path2/* )
if (( ${#files[@]} > 0 )); then
ls -d -- "${files[@]}"
else
echo >&2 No match
exit 2
fi
) In zsh , instead of enabling nullglob globally, you can enable it on a per-glob basis with the (N) glob qualifier (which inspired ksh's ~(N) , not copied by bash yet), and use an anonymous function again instead of an array: () {
(( $# > 0 )) && ls -d -- "$@"
} /some/path1/*(N) /some/path2/*(N) The fish shell now behaves similarly to zsh where failing globs cause an error, except when the glob is used with for , set (which is used to assign arrays) or count where it behaves in a nullglob fashion instead. Also, in fish , the brace expansion though not a glob operator in itself is done as part of globbing, or at least a command is not aborted when brace expansion is combined with globbing and at least one element can be returned. So, in fish : ls -d /some/{path1,path2}/* Would end up in effect behaving like in csh . Even: {ls,-d,/xx*} Would result in ls being called with -d alone if /xx* was not matched instead of failing (behaving differently from csh in this instance). In any case, if it's just to print the matching file paths, you don't need ls . In zsh , you could use its print builtin to print in columns: print -rC3 /some/{path1,path2}/*(N) Would print the paths r aw on 3 columns (and print nothing if there's no match with the N ullglob glob qualifier). If instead you want to check if there's at least one non-hidden file in any of those two directories, you can do: # bash
if (shopt -s nullglob; set -- /some/path1/* /some/path2/*; (($#))); then
echo yes
else
echo no
fi Or using a function: has_non_hidden_files() (
shopt -s nullglob
set -- "$1"/*
(($#))
)
if has_non_hidden_files /some/path1 || has_non_hidden_files /some/path2
then
echo yes
else
echo no
fi # zsh
if ()(($#)) /some/path1/*(N) /some/path2/*(N); then
echo yes
else
echo no
fi Or with a function: has_non_hidden_files() ()(($#)) $1/*(NY1)
if has_non_hidden_files /some/path1 || has_non_hidden_files /some/path2
then
echo yes
else
echo no
fi ( Y1 as an optimisation to stop after finding the first file) Beware those has_non_hidden_files would (silently) return false for directories that are not readable by the user (whether they have files or not). In zsh , you could detect this kind of situation with its $ERRNO special variable. ¹ The Bourne behaviour (which was specified by POSIX) can be enabled though in zsh by doing emulate sh or emulate ksh or with set +o nomatch ² beware there are significant differences in behaviour as to what exactly is cancelled when the glob doesn't match, the fish behaviour being generally the more sensible, and the bash -O failglob probably the worst | {
"source": [
"https://unix.stackexchange.com/questions/693025",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/512432/"
]
} |
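Since the failure above is only the lack of write access to /usr/local, a sketch of the usual unprivileged route, assuming the splint build honours the standard autotools --prefix switch:
./configure --prefix="$HOME/.local"
make
make install                             # installs under ~/.local, no sudo needed
export PATH="$HOME/.local/bin:$PATH"     # add this line to ~/.profile to make it permanent
command -v splint                        # should now resolve to ~/.local/bin/splint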
693,160 | For a class on cryptography, I am trying to drain the entropy pool in Linux (e.g. make /proc/sys/kernel/random/entropy_avail go to 0 and block a command reading from /dev/random ) but I can't make it happen. I'm supposed to get reads from /dev/random to block. If I execute these two commands: watch -n 0.5 cat /proc/sys/kernel/random/entropy_avail to watch entropy and then: od -d /dev/random to dump the random pool, the value from the watch command hovers between 3700 and 3900, and gains and loses only a little while I run this command. I let both commands run for about three minutes with no discernible substantial change in the size of entropy_avail . I didn't do much on the computer during that time. From googling around I find that perhaps a hardware random number generator could be so good that the entropy won't drop but if I do: cat /sys/devices/virtual/misc/hw_random/rng_available I see nothing, I just get a blank line. So I have a few questions: What's replenishing my entropy so well, and how can I find the specific source of randomness? Is there any way to temporarily disable sources of randomness so I can force this blocking to happen? | There is a surprising amount of development going on around the Linux random device. The slow, blocking /dev/random is gone and replaced by a fast /dev/random that never runs out of data. You'll have to travel back in time, like prior to linux 4.8 ( which introduced a much faster crng algorithm ) or possibly linux 5.6 ( which introduced jitter entropy generation ). There is no way to get the original behavior back in current kernels. If you are seeing this issue in older versions of Linux, hwrng aside, you might be using haveged or rng-tools rngd , or similar userspace entropy providers. Some distros install these by default to avoid hangs while waiting for a few random bits, in that case you can uninstall or disable them or try it from within an initrd / busybox shell where no other processes are running. If the issue still persists, you might just have a very noisy piece of hardware from which kernel keeps collecting entropy naturally. | {
"source": [
"https://unix.stackexchange.com/questions/693160",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/517039/"
]
} |
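A short sketch of the checks the answer above implies, i.e. which kernel is running and whether a userspace entropy daemon is feeding the pool (daemon names vary between distributions):
uname -r                                   # 5.6+ (and to a large extent 4.8+) no longer has a blocking /dev/random
pgrep -a 'haveged|rngd'                    # any hit means userspace is topping the pool up
cat /proc/sys/kernel/random/entropy_avail  # the counter the question is watching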
693,760 | I am writing a shell script and I need to print the nth argument of the script. For example,
suppose we have n=3 and our script is run with enough arguments.
Now I need to print the n th argument, i.e. $3 . But if n=2 , we would print argument $2 . I don't want to use if statements. I wanted to do something like echo $($n) but the above doesn't work the way I need it to. | By chronological order, in various shells: csh (late 70s): $argv[$n] (also works in zsh since 1991 and fish since 2005) zsh (1991): $argv[n] / $@[n] / $*[n] (the last too also supported by yash but only with extra braces: ${@[n]} / ${*[n]} ) rc (early 90s): $$n / $*($n) (also works in es, akanga) ksh93 (1993): ${@:n:1} , ${*:n:1} (also supported by bash since 1996; zsh also since 2010 , though you need ${@:$n:1} or ${@: n:1} there to avoid conflict with csh-style modifiers and see there about the "$*" case) bash (1996): ${!n} zsh ( 1999 ): ${(P)n} . Remember that in ksh93/bash/yash, you need to quote parameter expansions in list contexts at least, and csh can hardly be used to write reliable code. In bash , there's a difference between "${!n}" and "${@:n:1}" in list context when the n th positional parameter is not set in that the latter then expands to no argument at all whilst the former expands to one empty element. In Bourne-like shells (but not the Bourne shell where that won't work for indices past the 9th), and with standard POSIX sh syntax, you can also do: eval "nth=\${$n}" There will also be differences in behaviour among all those if $n does not contain the canonical decimal representation of an integer number strictly greater than 0. If you can't guarantee that will be the case, using most of those (not just the eval one) would introduce an arbitrary command execution vulnerability (the only exceptions being with rc and maybe csh above). Also remember that except in zsh (with echo -E - $argv[n] ), yash (with ECHO_STYLE=raw echo "${*[$n]}" ) and fish (with echo -- $argv[$n] ), echo can't be used to output arbitrary data, use printf '%s\n' ... instead ). | {
"source": [
"https://unix.stackexchange.com/questions/693760",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/268128/"
]
} |
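A small sketch of the portable eval route from the answer above, with the input check it warns about (rejecting anything that is not a plain positive decimal integer before it reaches eval):
case $n in
('' | 0* | *[!0-9]*) echo 'n must be a positive decimal integer' >&2; exit 1 ;;
esac
eval "nth=\${$n}"
printf '%s\n' "$nth"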
696,303 | I'm typing that command and my cursor is at the end of "supprimer des warnings" $ git commit -m "Nettoyage :
> - Suppression de sources ou projets inutiles
> - Corrections mineures sur les sources pour supprimer des warnings" That's when I noticed that I should have written "Nettoyage (deuxième partie)" at the beginning of my commit message. ...but how, being on the last line of my command, can I go back up to its beginning to edit its first line? | Unfortunately, command entry in Bash is line-oriented, and you can’t go back to a previous line while entering a multi-line command. What you can do however is start an editor with the full command entered so far. To do so, in Emacs mode (the default), press Ctrl x Ctrl e ; in vi mode, press Esc v . This will open your editor with everything you’ve entered so far; fix what needs fixing, complete the command , exit the editor and Bash will run the edited command. In this particular case you could use an editor for the entire git commit message: omit the -m option and git will start an editor for you. | {
"source": [
"https://unix.stackexchange.com/questions/696303",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/350549/"
]
} |
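A few related knobs, as a sketch (edit-and-execute-command and fc are standard bash/readline; the editor choice is just an example):
export VISUAL=vim                         # the editor Ctrl-x Ctrl-e opens; git also uses it when -m is omitted
bind -p | grep edit-and-execute-command   # confirm the emacs-mode binding: "\C-x\C-e"
fc                                        # related: re-open the previously executed command in an editor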
696,328 | I just finished updating the linux kernel via APT and restarted my machine. Then I checked for more updates and it said this: The following packages were automatically installed and are no longer required:
linux-headers-5.4.0-100 linux-headers-5.4.0-100-generic linux-image-5.4.0-100-generic linux-modules-5.4.0-100-generic linux-modules-extra-5.4.0-100-generic
Use 'sudo apt autoremove' to remove them. Should I use autoremove or not? | Unfortunately, command entry in Bash is line-oriented, and you can’t go back to a previous line while entering a multi-line command. What you can do however is start an editor with the full command entered so far. To do so, in Emacs mode (the default), press Ctrl x Ctrl e ; in vi mode, press Esc v . This will open your editor with everything you’ve entered so far; fix what needs fixing, complete the command , exit the editor and Bash will run the edited command. In this particular case you could use an editor for the entire git commit message: omit the -m option and git will start an editor for you. | {
"source": [
"https://unix.stackexchange.com/questions/696328",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/519245/"
]
} |
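A hedged aside on the autoremove question itself (the stored response covers a different topic): apt can at least show what would be removed before committing to it.
apt-get -s autoremove                     # -s / --simulate: list what autoremove would delete without removing anything
apt-mark showmanual | grep linux-image    # kernels marked as manually installed are not autoremove candidates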
696,335 | I am relatively new to Linux. I have installed Endeavour OS on my laptop (an HP Victus 16), and noticed underwhelming performance on apps like waydroid . It seems like Linux is only detecting the iGPU in my system. When I run xrandr --listproviders it gives me the output Providers: number : 0 ! Even going to Settings > About shows the graphics card as "AMD Renoir" only. Running lspci shows the dGPU connected as: Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Navi 14 [Radeon RX 5500/5500M / Pro 5500M] (rev c1) but it seems like it doesn't work anywhere else? Configuration of my laptop if it matters: AMD Ryzen 5600h
16 GB RAM
AMD RX 5500M graphics And the OS details: Endeavour OS Linux x86_64
Kernel: 5.17.0-247-tkg-pds | Unfortunately, command entry in Bash is line-oriented, and you can’t go back to a previous line while entering a multi-line command. What you can do however is start an editor with the full command entered so far. To do so, in Emacs mode (the default), press Ctrl x Ctrl e ; in vi mode, press Esc v . This will open your editor with everything you’ve entered so far; fix what needs fixing, complete the command , exit the editor and Bash will run the edited command. In this particular case you could use an editor for the entire git commit message: omit the -m option and git will start an editor for you. | {
"source": [
"https://unix.stackexchange.com/questions/696335",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/519248/"
]
} |
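Common diagnostic commands for hybrid AMD setups, offered only as a sketch and not as a confirmed fix for this laptop (the stored response above does not address the GPU question):
lspci -k | grep -A 3 'Navi 14'                 # check which kernel driver (amdgpu) is bound to the dGPU
xrandr --listproviders                         # under Xorg both GPUs should appear once the driver is loaded
DRI_PRIME=1 glxinfo | grep 'OpenGL renderer'   # ask Mesa to render this one client on the second GPU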
696,345 | Hello, I have a file called users. In that file I have a list of users, for example: user1
user2
user3 Now I have another file called searches where there is a specific string of the form owner = user , for example: owner = user1
random text
random text
owner = user15
random text
random text
owner = user2 So is it possible to find all the users based on the users file and rename those users to [email protected] ? For example: owner = [email protected]
random text
random text
owner = user15
random text
random text
owner = [email protected] I got some bits and pieces working using the ack command and the cat command, but I am new to programming so I can't get a proper output. What I figured out is below but it does not really do what I need. Any help is highly appreciated. cat users | xargs -i sed 's/{}/moo/' searches | Unfortunately, command entry in Bash is line-oriented, and you can’t go back to a previous line while entering a multi-line command. What you can do however is start an editor with the full command entered so far. To do so, in Emacs mode (the default), press Ctrl x Ctrl e ; in vi mode, press Esc v . This will open your editor with everything you’ve entered so far; fix what needs fixing, complete the command , exit the editor and Bash will run the edited command. In this particular case you could use an editor for the entire git commit message: omit the -m option and git will start an editor for you. | {
"source": [
"https://unix.stackexchange.com/questions/696345",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469475/"
]
} |
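A small sketch of one way to do the rewrite question 696,345 asks for, assuming GNU sed for -i, usernames containing no characters special to sed or the shell, and @mail.com standing in for whatever domain the obfuscated example used:
while IFS= read -r u; do
[ -n "$u" ] || continue                                  # skip blank lines in the users file
sed -i "s/^owner = $u\$/owner = $u@mail.com/" searches   # only touch exact "owner = user" lines
done < users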
697,256 | I was thinking back to my introduction to programming recently and remembered writing a C++ program that deliberately read and wrote to memory addresses at random. I did this to see what would happen. To my surprise, on my Windows 98 PC, my program would create some really weird side effects. Occasionally it would toggle OS settings, or create graphical glitches. More often than not it would do nothing or just crash the entire system. I later learned this was because Windows 98 didn't restrict what a user process had access to. I could read and write to RAM used by other processes and even the OS. It is my understanding that this changed with Windows NT (though I think it took a while to get right). Now Windows prevents you from poking around in RAM that doesn't belong to your process. I vaguely remember running my program on a Linux system later on and not getting nearly as many entertaining results. If I understand correctly this is, at least in part, due to the separation of User and Kernel space. So, my question is: Was there a time when Linux did not separate User and Kernel space?
In other words, was there a time when my rogue program could have caused similar havoc to a Linux system? | Linux has always protected the kernel by preventing user space from directly accessing the memory it uses; it has also always protected processes from directly accessing each others’ memory. Programs can only access memory through a virtual address space which gives access to memory mapped for them by the kernel; access outside allocated memory results in a segmentation fault. (Programs can access the kernel through system calls and drivers, including the infamous /dev/mem and /dev/kmem ; they can also share memory with each other.) Is the MMU inside of Unix/Linux kernel? or just in a hardware device with its own memory? explains how the kernel/user separation is taken care of in Linux nowadays (early releases of Linux handled this differently; see Linux Memory Management Overview and 80386 Memory Management for details). Some Linux-related projects remove this separation; for example the Embeddable Linux Kernel Subset is a subset of Linux compatible with the 8086 CPU, and as a result it doesn’t provide hardware-enforced protection. µClinux provides support for embedded systems with no memory management unit, and its core “ingredients” are now part of the mainline kernel, but such configurations aren’t possible on “PC” architectures. | {
"source": [
"https://unix.stackexchange.com/questions/697256",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/239497/"
]
} |
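A couple of quick, read-only ways to see the mechanisms the answer above describes from a shell:
head -n 5 /proc/self/maps              # the virtual address space the kernel has mapped for this very process
ls -l /dev/mem /dev/kmem 2>/dev/null   # the "infamous" devices mentioned above; root-only, /dev/kmem often absent
sudo dmesg | grep -i segfault          # rogue user-space writes end up here as segfaults, not as system-wide havoc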
697,825 | The OpenSSH client has a command line option for port forwarding, used like this: ssh -L localport:server:serverport user@host which will connect to host as user , and at the same time redirecting localport on the client to serverport on server (which can be host or anything reachable from host over the network). Now suppose I have SSHed into host doing just ssh user@host and in the middle of the session I realize I forgot to forward the port. Alas, I am in the middle of something, so I don’t just want to log out and re-establish the SSH connection with the port forwarding. Is there a way to add port forwarding to a running SSH session? | From man 1 ssh : ESCAPE CHARACTERS When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character. A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below. The escape character must always follow a newline to be interpreted as special. The escape character can be changed in configuration files using the EscapeChar configuration directive or on the command line by the -e option. The supported escapes (assuming the default ~ ) are: […] ~C Open command line. Currently this allows the addition of port forwardings using the -L , -R and -D options (see above). […] Basic help is available, using the -h option. So type Enter ~ C (i.e. capital c), then -L localport:server:serverport with desired localport , server and serverport , finally Enter . Notes: The initial Enter will be immediately sent to the remote side and may cause some action there, so pick a good moment (e.g. when you're in a shell with an empty command line). Or if you are sure the last thing you have typed is Enter anyway (e.g. you have just invoked a command that is now running), you can start directly with ~ because Enter has already been noticed by your local ssh . On internationalized keyboards the tilde could be a dead key for generating special 'tilded' characters (like pressing ~ n to generate ñ ). In that case, it could be necessary to press SPACE after ~ to generate a single tilde, i.e: ENTER ~ SPACE C . In the case of the Spanish/LA keyboard layouts, as there is no combined character using tilde and C, the space can be omitted and the ~ C generates the desired sequence. Regarding multiple redirections, the ssh escaped command line only accepts a single command. You should press again the keyboard sequence to enter another command or redirection. | {
"source": [
"https://unix.stackexchange.com/questions/697825",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/91283/"
]
} |
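For sessions started with a ControlMaster socket there is also a non-interactive route, sketched here (the options are standard OpenSSH; the socket path is only an example):
ssh -M -S ~/.ssh/ctl-%r@%h:%p user@host                                          # first connection, with a control socket
ssh -S ~/.ssh/ctl-%r@%h:%p -O forward -L localport:server:serverport user@host   # later: add a forwarding to it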
698,627 | I'm doing some processing, trying to count how many different lines there are in a file containing 160,353,104 lines. Here is my pipeline and stderr output. $ tail -n+2 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 |\
sort -T. -S1G | tqdm --total=160353104 | uniq -c | sort -hr > users
100%|████████████████████████████| 160353104/160353104 [0:15:00<00:00, 178051.54it/s]
79%|██████████████████████ | 126822838/160353104 [1:16:28<20:13, 027636.40it/s]
zsh: done tail -n+2 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 |
zsh: killed sort -T. -S1G |
zsh: done tqdm --total=160353104 | uniq -c | sort -hr > users My command-line PS1 or PS2 printed the return codes of all processes of the pipeline. ✔ 0|0|0|KILL|0|0|0 The first char is a green checkmark that means the last process returned 0 (success). The other numbers are the return codes for each one of the pipelined processes, in the same order. So I've noticed that my fourth command got KILL status; this is my sort command sort -T. -S1G , setting the local directory as temp storage and a buffer of up to 1GiB. The question is, why did it return KILL? Does it mean something sent a KILL signal to it?
Is there a way to know "who killed" it? Updates After reading Marcus Müller Answer , I first tried to load the data into Sqlite. So, maybe this is a good moment to tell you that, no, don't use a CSV-based data flow. A simple sqlite3 place.sqlite and in that shell (assuming your CSV has a title row that SQLite can
use to determine the columns) (of course, replace $second_column_name
with the name of that column) .import 022_place_canvas_history.csv canvas_history --csv
SELECT $second_column_name, count($second_column_name) FROM canvas_history
GROUP BY $second_column_name; This was taking a lot of time, so I left it processing and went to do other things. While it ran I thought more about this other paragraph from Marcus Müller Answer : You just want to know how often each value appeared on the second column. Sorting that before just happens because your tool ( uniq -c ) is bad, and needs the rows to be sorted before (there's literally no good reason for that. It's just not implemented that it could hold a map of values and their frequency and increase that as they appear). So I thought, I can implement that. When I got back to the computer, my Sqlite import process had stopped because of a broken SSH pipe; I think the connection was closed because it didn't transmit data for a long time.
Ok, what a good opportunity to implement a counter using a dict/map/hashtable. So I've written the following distinct file: #!/usr/bin/env python3
import sys
conter = dict()
# Create a key for each distinct line and increment it each time the line shows up.
for l in sys.stdin:
conter[l] = conter.setdefault(l, 0) + 1 # After Update2 note: don't do this, just do `conter[l] = conter.get(l, 0) + 1`
# Print entries sorting by tuple second item ( value ), in reverse order
for e in sorted(conter.items(), key=lambda i: i[1], reverse=True):
k, v = e
print(f'{v}\t{k}') So I've used it in the following command pipeline. tail -n+1 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 | ./distinct > users2 It was going really fast, with a tqdm projection of less than 30 minutes, but when it got to 99% it was getting slower and slower. This process was using a lot of RAM, about 1.7GiB. The machine I'm working on with this data, the one where I have enough storage, is a VPS with just 2GiB RAM and ~1TiB storage. I thought it may be getting so slow because the OS was having to handle this huge amount of memory, maybe doing some swap or other things.
I waited anyway, and when it finally got to 100% in tqdm, all data had been sent into the ./distinct process; after some seconds I got the following output: 160353105it [30:21, 88056.97it/s]
zsh: done tail -n+1 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 |
zsh: killed ./distinct > users2 This time mostly sure cause by out-of-memory-killer as spotted in Marcus Müller Answer TLDR section. So I've just checked and I don't have swap enabled in this machine. Disabled it after complete its setup with dmcrypt and LVM as you may get more information in this answers of mine . So what I'm thinking is to enable my LVM swap partition and trying to run it again. Also at some moment I think that I've seen tqdm using 10GiB of RAM. But I'm pretty sure I've seen wrongly or btop output mixed up, as latter it showed only 10MiB, don't think tqdm would use much memory as it just counts and updates some statics when reading a new \n . In Stéphane Chazelas comment to this question they say: The system logs will possibly tell you. I would like to know more about it, should I find something in journalctl? If so, how to do it? Anyways, as Marcus Müller Answer says, loading the csv into Sqlite may be by far the most smart solution, as it will allow to operate on data in may ways and probably has some smart way to import this data without out-of-memory. But now I'm twice curious about how to find out why as process was killed, as I want to know about my sort -T. -S1G and now about my ./distinct , the last one almost sure it was about memory. So how to check for logs that says why those process were killed? Update2 So I've enabled my SWAP partition and took Marcus Müller suggestion from this question comment. Using pythons collections.Counter. So my new code ( distinct2 ) looks like this: #!/usr/bin/env python3
from collections import Counter
import sys
print(Counter(sys.stdin).most_common()) So I've run Gnu Screen to even if I get a broken pipe again I could just resume the session, than run it in the follow pipeline: tail -n+1 2022_place_canvas_history.csv | cut -d, -f2 | tqdm --total=160353104 --unit-scale=1 | ./distinct2 | tqdm --unit-scale=1 > users5 That got me the follow output: 160Mit [1:07:24, 39.6kit/s]
1.00it [7:08:56, 25.7ks/it] As you can see it took way more time to sort the data than to count it.
One other thing you may notice is that tqdm second line output shows just 1.00it, it means it got just a single line. So I've checked the user5 file using head: head -c 150 users5
[('kgZoJz//JpfXgowLxOhcQlFYOCm8m6upa6Rpltcc63K6Cz0vEWJF/RYmlsaXsIQEbXrwz+Il3BkD8XZVx7YMLQ==\n', 795), ('JMlte6XKe+nnFvxcjT0hHDYYNgiDXZVOkhr6KT60EtJAGa As you can see, it printed the entire list of tuples in a single line. For solving this I've used the good old sed as follow sed 's/),/)\n/g' users5 > users6 . After it I've checked users6 content using head, as follow with its output: $ head users6
[('kgZoJz/...c63K6Cz0vEWJF/RYmlsaXsIQEbXrwz+Il3BkD8XZVx7YMLQ==\n', 795)
('JMlte6X...0EtJAGaezxc4e/eah6JzTReWNdTH4fLueQ20A4drmfqbqsw==\n', 781)
('LNbGhj4...apR9YeabE3sAd3Rz1MbLFT5k14j0+grrVgqYO1/6BA/jBfQ==\n', 777)
('K54RRTU...NlENRfUyJTPJKBC47N/s2eh4iNdAKMKxa3gvL2XFqCc9AqQ==\n', 767)
('8USqGo1...1QSbQHE5GFdC2mIK/pMEC/qF1FQH912SDim3ptEFkYPrYMQ==\n', 767)
('DspItMb...abcd8Z1nYWWzGaFSj7UtRC0W75P7JfJ3W+4ne36EiBuo2YQ==\n', 766)
('6QK00ig...abcfLKMUNur4cedRmY9wX4vL6bBoV/JW/Gn6TRRZAJimeLw==\n', 765)
('VenbgVz...khkTwy/w5C6jodImdPn6bM8izTHI66HK17D4Bom33ZrwuGQ==\n', 758)
('jjtKU98...Ias+PeaHE9vWC4g7p2KJKLBdjKvo+699EgRouCbeFjWsjKA==\n', 730)
('VHg2OiSk...3c3cr2K8+0RW4ILyT1Bmot0bU3bOJyHRPW/w60Y5so4F1g==\n', 713) Good enough to work with later. Now I think I should add an update after checking who killed my sort using dmesg or journalctl. I'm also wondering whether there is a way to make this script faster. Maybe creating a thread pool, but I'd have to check Python's dict behaviour; I also thought about other data structures, since the column I'm counting is a fixed-width string, maybe using a list to store the frequency of each distinct user_hash. I also read the Python implementation of Counter; it's just a dict, pretty much the same implementation I had before, except that instead of dict.setdefault it just uses dict[key] = dict.get(key, 0) + 1 , so my use of setdefault was a misuse with no real need in this scenario. Update3 So I'm getting deep into the rabbit hole and totally lost focus of my objective. I started searching for faster sorting, maybe writing some C or Rust, but then realized that I already have the data I came for. So I'm here to show the dmesg output and one final tip about the Python script. The tip: it may be better to just count using a dict or Counter, and then sort that output with the GNU sort tool, since sort probably sorts faster than Python's sorted builtin. About dmesg, it was pretty simple to find the out-of-memory kills: I just ran sudo dmesg | less , pressed G to go all the way down, then ? to search backwards, and searched for the string Out. I found two of them, one for my Python script and another for my sort, the one that started this question. Here are those outputs: [1306799.058724] Out of memory: Killed process 1611241 (sort) total-vm:1131024kB, anon-rss:1049016kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:2120kB oom_score_adj:0
[1306799.126218] oom_reaper: reaped process 1611241 (sort), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
[1365682.908896] Out of memory: Killed process 1611945 (python3) total-vm:1965788kB, anon-rss:1859264kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:3748kB oom_score_adj:0
[1365683.113366] oom_reaper: reaped process 1611945 (python3), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB That's it, thank you so much for helping so far, hope it help others too. | TL;DR: out-of-memory-killer or running out of disk space for temporary files kills sort . Recommendation: Use a different tool. I've had a glance over GNU coreutils' sort.c right now¹. Your -S 1G just means that the sort process tries to allocate memory in a chunk of 1GB, and will fall back to increasingly smaller sizes if that is not possible. After having exhausted that buffer, it will create a temporary file to store the already sorted lines², and sort the next chunk of input in-memory. After all input has been consumed, sort will then merge/sort two of the temporary file into a temporary file (mergesort-style), and successively merge all the temporaries until the merging will yield the total sorted output, which is then output to stdout . That's clever, because it means you can sort input larger than available memory. Or, it's clever on systems where these temporary files are not themselves held in RAM, which they typically are these days ( /tmp/ is typically a tmpfs , which is a RAM-only file system). So, writing these temporary files eats exactly the RAM you're trying to save, and you're running out of RAM: your file has 160 million lines, and a quick google suggests it's 11GB of uncompressed data. You can "help" sort around that by changing the temporary directory it uses. You're already doing that, -T. , placing the temporary files in your current directory. Might be you're running out of space there? Or is that current directory on tmpfs or similar? You've got a CSV file with an medium amount of data (160 million rows is not that much data for a modern PC). Instead of putting that into a system meant to deal with that much data, you're trying to operate on it with tools from the 1990s (yes, I just read sort git history), when 16 MB RAM seemed quite generous. CSV is just the wrong data format for processing any significant amount of data, and your example is the perfect illustration of that. Inefficient tooling working on inefficient data structure (a text file with lines) in inefficient ways to achieve a goal with an inefficient approach: You just want to know how often each value appeared on the second column. Sorting that before just happens because your tool ( uniq -c ) is bad, and needs the rows to be sorted before (there's literally no good reason for that. It's just not implemented that it could hold a map of values and their frequency and increase that as they appear). So, maybe this is a good moment to tell you that, no, don't use a CSV-based data flow. A simple sqlite3 place.sqlite and in that shell (assuming your CSV has a title row that SQLite can use to determine the columns) (of course, replace $second_column_name with the name of that column) .import 022_place_canvas_history.csv canvas_history --csv
SELECT $second_column_name, count($second_column_name)
FROM canvas_history
GROUP BY $second_column_name; is likely to be as fast, and bonus, you get an actual database file place.sqlite . You can play around with that much more flexibly – for example, create a table where you extract coordinates, and convert the times to numerical timestamps, and then be much faster and more flexible by what you analyze. ¹ The globals, and the inconsistency on what is used when. They hurt. It was a different time for C authors. And it's definitely not bad C, just ... not what you're used to from more modern code bases. Thanks to Jim Meyering and Paul Eggert for writing and maintaining this code base! ² you can try to do the following: sort a file that's not too massive, say, ls.c with say has 5577 lines, and record the number of files opened: strace -o /tmp/no-size.strace -e openat sort ls.c
strace -o /tmp/s1kB-size.strace -e openat sort -S 1 ls.c
strace -o /tmp/s100kB-size.strace -e openat sort -S 100 ls.c
wc -l /tmp/*-size.strace | {
"source": [
"https://unix.stackexchange.com/questions/698627",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/182552/"
]
} |
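As a footnote to the SQLite suggestion above, here is a minimal end-to-end sketch of the import-and-count step as a single shell command. The column name user_hash is an assumption (use whatever the header row of 2022_place_canvas_history.csv actually calls the second column), and .mode csv is used instead of the --csv flag so it also works with slightly older sqlite3 builds.
# Import the CSV into a table, then count rows per user, most frequent first.
sqlite3 place.sqlite <<'SQL'
.mode csv
.import 2022_place_canvas_history.csv canvas_history
SELECT user_hash, COUNT(*) AS n
FROM canvas_history
GROUP BY user_hash
ORDER BY n DESC
LIMIT 10;
SQL
Once the data is in place.sqlite, repeated questions about it (per-coordinate counts, time ranges, and so on) become single queries instead of new multi-hour pipelines.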
698,919 | I'm looking for a way to nullify the undesirable behaviour of some installers that append code to .bashrc to force-load their environment automatically. The problem cropped-up a few times, mostly with Conda, and in some cases the user ended-up with a broken account that prevented them from logging in anymore. I tried to add an unclosed here-document at the end of .bashrc, like this: # .bashrc
#...
: <<'__END__' Which works, but generates annoying warnings. What would be a clean way to do that (without making the .bashrc readonly)? | If you end your .bashrc with return 0 Bash will ignore any lines added after that , since .bashrc is handled like a sourced script: return may also be used to terminate execution of a script being executed with the . ( source ) builtin, returning either n or the exit status of the last command executed within the script as the exit status of the script. ( exit 0 causes the shell to exit, which isn’t what you want.)
"source": [
"https://unix.stackexchange.com/questions/698919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/504798/"
]
} |
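A quick way to convince yourself that the return 0 trick works, without touching your real ~/.bashrc, is a throwaway rc file; a minimal sketch:
# Build a demo rc file that ends with "return 0", then append a line after it,
# the way a misbehaving installer would.
cat > /tmp/demo_bashrc <<'EOF'
echo "rc: this line runs"
return 0
EOF
echo 'echo "rc: this line is never reached"' >> /tmp/demo_bashrc
# Source it; only the first echo fires, the appended line is ignored.
bash -c 'source /tmp/demo_bashrc'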
699,021 | According to the FHS about /dev at: 6.1.3. /dev : Devices and special files it contains: The following devices must exist under /dev.
/dev/null
All data written to this device is discarded. A read from this device will return an EOF condition.
/dev/zero
This device is a source of zeroed out data. All data written to this device is discarded. A read from this device will return as many bytes containing the value zero as was requested.
... Observe that both have: All data written to this device is discarded I read many tutorials where /dev/null is always used to discard data. But since both have the same purpose for writing (discarding), the Question is: when is it mandatory to use /dev/zero over /dev/null for the write/discard purpose? BTW for other differences - practically mostly about reads - we have available: Difference between /dev/null and /dev/zero | If you're using Linux, it's never "mandatory" to redirect to /dev/null instead of /dev/zero . As you've noticed, you'll get the same result either way. That said, you should always redirect to /dev/null if you're discarding data. Because everyone understands that writing to /dev/null means throwing the data away; it's expressing your intention. On the other hand, writing to /dev/zero also throws your data away, but it's not immediately obvious that that's what you're trying to do. Besides that, I'd be concerned whether writes to /dev/zero are allowed on other Unices, like the BSDs etc. I don't think /dev/zero is even required by POSIX, while /dev/null is . So using /dev/null for its intended purpose is maximally portable; doing anything else is sacrificing portability for no gain. | {
"source": [
"https://unix.stackexchange.com/questions/699021",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/383045/"
]
} |
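For completeness, a small Linux demo showing that writes behave the same on both devices while reads are where they differ, which is the practical reason to keep /dev/null for discarding and /dev/zero for reading zeroes:
# Writing: both devices accept and discard the data.
echo hi > /dev/null
echo hi > /dev/zero
# Reading: /dev/null returns EOF immediately, /dev/zero returns endless NUL bytes.
wc -c < /dev/null                   # prints 0
head -c 8 /dev/zero | od -An -tx1   # prints eight 00 bytes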
699,386 | As discussed in Understanding UNIX permissions and file types , each file has permission settings ("file mode") for: the owner / user (" u "), the owner's group (" g "), and everyone else (" o "). As far as I understand, the owner of a file can always change the file's permissions using chmod . So can any application running under the owner. What is the reason for restricting the owner's own permissions if they can always change them? The only use I can see is the protection from accidental deletion or execution, which can be easily overcome if intended. A related question has been asked here: Is there a reason why 'owner' permissions exist? Aren't group permissions enough? It discusses why the owner's permissions cannot be replaced by a dummy group consisting of a single user (the owner). In contrast, here I am asking about the purpose of having permissions for the owner in principle , no matter if they are implemented through a separate " u " octal or a separate group + ACLs. | There are various reasons to reduce the owner's permissions (though rarely to less than that of the group). The most common is not having execute permission on files not intended to be executed. Quite often, shell scripts are fragments intended to be sourced from other scripts (e.g. your .profile ) and don't make sense as top-level processes. Command completion will only offer executable files, so correct permissions helps in interactive shells. Accidentally overwriting a file is a substantial risk - it can happen through mistyping a command, or even more easily in GUI programs. One of the first things I do when copying files from my camera is to make them (and their containing directory) non-writeable, so that any edits I make must be copies, rather than overwriting the original. Sometimes it's important that files are not even readable. If I upgrade my Emacs and have problems with local packages in my ~/lisp directory, I selectively disable them (with chmod -r ) until it can start up successfully; then I can make them readable one at a time as I fix compatibility problems. A correct set of permissions for user indicates intentionality . Although the user can change permissions, well-behaved programs won't do that (at least, not without asking first). Instead of thinking of the permissions as restricting the user , think of them as restricting what the user's processes can do at a given point in time. | {
"source": [
"https://unix.stackexchange.com/questions/699386",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/419627/"
]
} |
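A small illustration of the point above that permissions restrict the owner's processes rather than the owner; run it as a normal (non-root) user in a scratch directory:
echo original > photo.raw
chmod a-w photo.raw       # protect the file from accidental modification
echo oops > photo.raw     # fails with "Permission denied", even for the owner
chmod u+w photo.raw       # the owner can always hand the permission back
echo edited > photo.raw   # now it succeeds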
700,151 | I lost some files by using the mv command. I don't know where they are. They are not in the directory to which I intended to copy them. Below is a transcript of what I did: samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ cd
samuelcayo@CAYS07019906:~$ ls
Desktop Documents Downloads GameShell Music Pictures pratice Public Templates Videos
samuelcayo@CAYS07019906:~$ mkdir tp2
samuelcayo@CAYS07019906:~$ ls
Desktop Documents Downloads GameShell Music Pictures pratice Public Templates tp2 Videos
samuelcayo@CAYS07019906:~$ cd Downloads/221-tp2-public-main/
samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ ls
backup copybash Dockerfile ntfy-1.16.0 packets.txt README.md restore secret
cloud data Dockerfile_CAYS07019906 ntfy.zip rapport-tp2.md remplacer.sed sauvegarde.sh tail
samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ mv rapport-tp2.md tp2
samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ mv Dockerfile_CAYS07019906 tp2
samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ mv packets.txt tp2
samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ mv sauvegarde.sh tp2
samuelcayo@CAYS07019906:~/Downloads/221-tp2-public-main$ cd
samuelcayo@CAYS07019906:~$ cd tp2/
samuelcayo@CAYS07019906:~/tp2$ ls
samuelcayo@CAYS07019906:~/tp2$ ls -l
total 0
samuelcayo@CAYS07019906:~/tp2$ cd .. | You created a directory called tp2 in your home directory, i.e. you created the directory ~/tp2 . You then changed into ~/Downloads/221-tp2-public-main and started to move files with mv . Since you specified the target of each mv operation as tp2 , and since tp2 was not a directory in your current directory, each file you moved was instead renamed tp2 . You overwrote the file previously called tp2 each subsequent time you ran mv . In the end, the tp2 that you were left with is the file previously called sauvegarde.sh . You would have avoided the loss of data by using ~/tp2/ as the target of each mv operation. The ~ refers to your home directory, where you created your tp2 directory. The / at the end of the target path is not strictly necessary, but it makes mv fail gracefully if ~/tp2 is not a directory. As for what you can do now to restore your lost files; consider restoring them from a recent backup if you don't have other copies of them lying around elsewhere. | {
"source": [
"https://unix.stackexchange.com/questions/700151",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/523343/"
]
} |
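The failure mode described above, and the trailing-slash safeguard, can be reproduced safely in a scratch directory; a minimal sketch:
cd "$(mktemp -d)"
touch a.txt b.txt
mv a.txt tp2     # no directory named tp2 here, so a.txt is silently renamed to tp2
mv b.txt tp2/    # the trailing slash makes this fail ("Not a directory") instead of clobbering
ls               # b.txt is still there, plus the file now called tp2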
700,190 | I have an attach volume that gets a snapshot every hour. In order to test the snapshot performance, I need to run a process that will, between snapshot backups, generate a large amount of "churn" or file change. There are two questions related to this: simply and obviously, how to generate large blocks of text EFFICIENTLY and write them to disc. With my limited knowledge about the only thing I can think of is a for loop generating random characters, but that's probably extremely slow. Also, the new randomness if replacing a file has to be such that the snapshot essentially has no patterns to match. what is the most effective way to store this? e.g. 1 Gigabyte in 1000 files, or 100 GB in 10 files Since a picture is worth 1K words, I drew up this conceptually: Thanks in advance for insight on coupling tools-to-use with insight on the file system. | {
"source": [
"https://unix.stackexchange.com/questions/700190",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/104388/"
]
} |
700,198 | I couldn't find any guide or documentation referring on how to add files to a custom Alpine Linux ISO, the nearest i could find is this page on the Alpine Wiki about creating a custom ISO image with mkimage I would prefer to have my automated installation scripts and answer files directly on the ISO instead of having to download them through wget | {
"source": [
"https://unix.stackexchange.com/questions/700198",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/274181/"
]
} |
700,200 | I've got a small SRE lab set up using various Vagrant boxes (VirtualBox backend). I usually work on a Debian or Archlinux box and attach to a Windows box via remote debugging. On my Linux boxes, X11 forwarding is enabled and works usually. When I try to run Cutter (the official rizin GUI), either from the AppImage or unpacked, I receive the following error: The X11 connection broke: No error (code 0)
X connection to localhost:10.0 broken (explicit kill or server shutdown). I've never seen something like this before and I can't reproduce it with any other application, AppImage or not. Cutter runs fine locally, other applications run fine via X11 forwarding in the boxes, only this one errors on both, the Debian and the Arch box. Any idea where to start debugging is appreciated :) | {
"source": [
"https://unix.stackexchange.com/questions/700200",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/165427/"
]
} |
701,317 | I just learned a trick to create a new file with the cat command. By my testing, if the last line is not followed by a newline, I have to type ctrl+d twice to finish the input, as demonstrated below. [root@192 ~]# cat > test
a
b ctrl+d [root@192 ~]# cat > test
a
b ctrl+d ctrl+d [root@192 ~]# Is this expected? Why this behavior? | Yes, it's expected. We say that Ctrl-D makes cat see "end of file" in the input, and it then stops reading and exits, but that's not really true. Since that's on the terminal, there's no actual "end", and in fact it's not really "end of file" that's ever detected, but any read() of zero bytes. Usually, the read() system call doesn't return zero bytes except when it's known there's no more available, like at the end of a file. When reading from a network socket where there's no data available, it's expected that new data will arrive at some point, so instead of that zero-byte read, the system call will either block and wait for some data to arrive, or return an error saying that it would block. If the connection was shut down, then it would return zero bytes, though.
Then again, even on a file, reading at (or past) the end is not an interminably final end as another process could write something to the file to make it longer, after which a new attempt to read would return more data. (That's what a simple implementation of tail -f would do.) For a lot of use-cases treating "zero bytes read" as "end of file detected" happens to work well enough that they're considered effectively the same thing in practice. What the Ctrl-D does here, is to tell the terminal driver to pass along everything it was given this far, even if it's not a full line yet. At the start of a line, that's all of zero bytes, which is detected as an EOF. But after the letter b , the first Ctrl-D sends the b , and then the next one sends the zero bytes entered after the b , and that now gets detected as the EOF. You can also see what happens if you just run cat without a redirection. It'll look something like this, the parts in italics are what I typed: $ cat foo Ctrl-D foo When Ctrl-D is pressed, cat gets the input foo , prints it back and continues waiting for input. The line will look like foofoo , and there's no newline after that, so the cursor stays there at the end. | {
"source": [
"https://unix.stackexchange.com/questions/701317",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/520451/"
]
} |
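On Linux you can watch the zero-bytes-read end-of-file condition described above directly with strace (the exact trace formatting varies between versions); a minimal sketch:
# Trace only read() calls while cat copies a short pipe to /dev/null.
# The final read() returning 0 is what cat treats as end-of-file.
printf 'tweeter' | strace -e trace=read cat > /dev/null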
702,221 | Trying to learn about UIDs and GIDs. Various online reading led me to believe that my UID is saved in /etc/passwd , but this doesn't appear to be the case on a server where I work: $ whoami
user1
$ cat /etc/passwd | grep user1
$ Is there a(nother) file besides /etc/passwd that could contain my UID? (I'm assuming UID is similar to GID in that there is a file somewhere that contains it. I've found the GID I'm interested in in the file /etc/group ) I know that I can get my UID with the command id -u , but for this question, I'm specifically interested in learning whether there's a file that contains it. | Yes /etc/passwd is one of many ways the user account database can be stored and queried. In many Unix-like systems, the Name Service Switch (initially from Solaris) is responsible for translating some system names to/from ids using a number of methods. Its configuration is usually stored in /etc/nsswitch.conf . In there, you'll find entries for a number of databases and how they are handled (group, passwd, services, hosts, networks...). For the hosts database which is used to translate host names to network protocol addresses, you'll find that DNS and sometimes mDNS are generally queried in addition to /etc/hosts . When a process requests information about a user name such as with the getpwnam() standard function, the methods to use are looked up in that file for the passwd entry. If such a method is the files method, /etc/<db> will be looked up. On GNU systems, that's typically done by some /lib/<system>/libnss_files.so.<version> dynamically loaded module. But you can have many more, such as NIS+, LDAP, SQL. Some of those methods are included with the GNU libc, some can be installed separately. On Debian or derivatives, see the output of apt-cache search 'NSS module' for instance. In enterprise environments, where the user database is centralised, the most popular central DB was NIS, then NIS+ while these days, it's rather LDAP or Microsoft's Active Directory (or its clones for Unix). If present, the get{pw/host/grp}...() functions of the GNU libc will also query a name service caching daemon via /run/nscd/socket instead of invoking the whole NSS stack and query the backend DBs directly. Then the querying will be done by nscd and cached to speed up later queries. Some NSS modules can can also do their caching themselves. On GNU/Linux systems, a popular method is using System Security Services ( sss ). That comes with a separate daemon ( sssd ) that handles the requests and despatches them to other databases (such as LDAP / AD) while also doing some caching. Then /etc/nsswitch.conf will have a sss method for most DBs, and the backends are configured in the sssd configuration. PAM (responsible for authentication) also typically queries sssd in that case. That should help clarify why querying /etc/passwd (or /etc/group or /etc/hosts ...) to get account (or group/host...) information from the command line is wrong in the general case. Most modern systems will have a getent command instead for that (also from Solaris), or more portably, you can use perl 's interface to all the standard get<db>*() functions. $ getent passwd bin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
$ perl -le 'print for getpwnam("bin")'
bin
x
2
2
bin
/bin
/usr/sbin/nologin $ getent services domain
domain 53/tcp
$ perl -le 'print for getservbyname("domain", "tcp")'
domain
53
tcp
$ perl -le 'print for getservbyname("domain", "udp")'
domain
53
udp | {
"source": [
"https://unix.stackexchange.com/questions/702221",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/214773/"
]
} |
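To explore this on your own machine, a short sketch that shows which NSS backends are configured and then queries a few databases through the same stack the C library uses (the group name sudo is just an example and may not exist everywhere):
# Which sources handle the passwd and hosts databases here?
grep -E '^(passwd|hosts):' /etc/nsswitch.conf
# Query through NSS instead of reading /etc/passwd or /etc/hosts directly.
getent passwd root
getent hosts localhost
getent group sudo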
703,272 | I have no idea what created this file - I guess a terrible shell script. It is called '.env'$'\r' I have tried various versions of rm , and the technique of opening the directory with vim ./ , selecting the file, and Shift-D to delete. This didn't work, failing with a **warning** (netrw) delete(/root/squawker/.env) failed!
NetrwMessage [RO]
"NetrwMessage" --No lines in buffer-- How can I delete this pesky file? This is on Ubuntu 20.04 | On recent-ish Linux systems (with GNU tools as in most desktop distributions), ls prints names with weird characters using the shell's quoting syntax. If that '.env'$'\r' is what ls gives, the name of the file is .env<CR> , where <CR> is the carriage-return character. You could get that if you had a shell script with Windows line-endings that ran e.g. whatever > .env . The good thing here is that the output of ls there is directly usable as input to the shell. Well, to Bash, ksh, and zsh at least, not a standard POSIX sh, like Debian/Ubuntu's /bin/sh , Dash. So try with just rm -f '.env'$'\r' Of course rm -f .env? should also work to remove anything named .env plus any one character. Now, of course it's also possible that the filename is literally that, what with the single quotes and backslashes. But that's more difficult to achieve by accident. Even so, rm -f *.env* should work to delete anything with .env in the name. | {
"source": [
"https://unix.stackexchange.com/questions/703272",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/526484/"
]
} |
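If you want to rehearse the fix before touching the real file, you can create and delete a victim with the same kind of name in a scratch directory; a small sketch using the bash/zsh $'...' quoting:
cd "$(mktemp -d)"
touch $'.env\r'   # a file whose name ends in a carriage return
ls -A             # on a terminal, GNU ls shows it as '.env'$'\r'
rm -f $'.env\r'   # the same $'...' quoting removes it
ls -A             # gone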
704,956 | I'm still confused about the concept of kernel and filesystem. Filesystems contain a table of inodes used to retrieve the different files and directories in different memories. Is this inode table part of the kernel? I mean, is the inode table updated when the kernel mounts another filesystem? Or is it part of the filesystem itself that the kernel reads by somehow using a driver and inode table address? | There is some confusion here because kernel source and documentation is sloppy with how it uses the term 'inode'. The filesystem can be considered as having two parts -- the filesystem code and data in memory, and the filesystem on disk. The filesystem on disk is self contained and has all the non-volatile data and metadata for your files. For most linux filesystems, this includes the inodes on disk along with other metadata and data for the files. But when the filesystem is mounted, the filesystem code also keeps in memory a cached copy of the inodes of files being used. All file activity uses and updates this in memory copy of the inode, so the kernel code really only thinks about this in memory copy, and most kernel documentation doesn't distinguish between the on disk inode and the in memory inode. Also, the in memory inode contains additional ephemeral metadata (like where the cache pages for the file are in memory and which processes have the file open) that is not contained in the on disk copy of the inode. The in memory inode is periodically synchronized and written back to disk. The kernel does not have all the inodes in memory -- just the ones of files in use and files that recently were in use. Eventually inodes in memory get flushed and the memory is released. The inodes on disk are always there. Because file activity in unix is so tightly tied to inodes, filesystems (like vfat) that do not use inodes still have virtual inodes in kernel memory that the filesystem code constructs on the fly. These in memory virtual inodes still hold file metadata that is synchronized to the filesystem on disk as needed. In a traditional unix filesystem, the inode is the key data structure for a file. The filename is just a pointer to the inode, and an inode can have multiple filenames linked to it. In other filesystems that don't use inodes, a file can typically only have one name and the metadata is tied to the filename rather than an inode. | {
"source": [
"https://unix.stackexchange.com/questions/704956",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/65878/"
]
} |
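A few commands that make the inode side of this visible on a typical ext4 system; a small sketch for a scratch directory:
touch demo.txt
ls -i demo.txt            # the inode number this directory entry points to
stat demo.txt             # metadata held in (or derived from) that inode
ln demo.txt demo2.txt     # a second name linked to the same inode
ls -i demo.txt demo2.txt  # same inode number for both names
df -i .                   # total and used inode counts for the filesystem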
704,960 | By default, gawk pads the content to the specified length with space character: root@u2004:~# awk 'BEGIN{printf("|%+5s|\n", "abc")}'
| abc|
root@u2004:~# Is it possible to specify a custom padding character? For example, how can I get |__abc| ? | {
"source": [
"https://unix.stackexchange.com/questions/704960",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/520451/"
]
} |
705,447 | I'm used to using cat > /path/to/file << EOF when I, in a bash script, print more than one line into a file... I was checking some old code at my company and I found the cat EOT instruction instead of the cat EOF I'm used to (please notice the T instead of the F at the end of it) and curiosity bit me. I did some quick research and only found this other question , but I think it was not related to what I wanted to know. I did some tests with the following code: password=hello
cat > ./hello.txt << EOT
authentication {
auth_type PASS
auth_pass $password
}
EOT And I get the exact same output as when I use EOF instead of EOT . The output is, as expected: root@test_VM:~# bash test.sh && cat hello.txt
authentication {
auth_type PASS
auth_pass hello
} So the questions are: What are the differences between the use of EOT and EOF ? When should I use one over the other? | There is no difference, and no particular meaning to those two strings, or any others. It's just an arbitrary terminator and you can use almost any string you like. Of course, the data itself can't contain that particular line, so if your data contains e.g. a shell script that has another here-doc, you'll need to use different terminators in both. Using somewhat descriptive strings may be useful for any future readers of the script. E.g. cat > test.sh <<END_OF_SCRIPT
cat <<EOF
hello
EOF
END_OF_SCRIPT produces test.sh which, when executed through the shell prints hello . There is a difference if you quote the terminator in the line that starts the here-doc, though, it'll prevent expansions in the here-doc data. This prints $i , not whatever the value of the variable is: cat << 'EOF'
$i
EOF See also: 3.6.6 Here Documents in Bash's manual 2.7.4 Here-Document in the POSIX Shell Language description Here Document in wiki.wooledge.org | {
"source": [
"https://unix.stackexchange.com/questions/705447",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/260722/"
]
} |
706,717 | I'd like to know if it's possible to just repeat part of a command.
I.e. if I do ls /path/to/somewhere -a , I only want to remove ls and -a . I know that if I do !! it repeats the previous command (appending the last command to whichever command you write before it) and that if I do !$ it includes the last part of the string, but I'd like to know if it's possible to re-use only the e.g. middle part of the previous command. | Sure, use !^ e.g. $ ls /path/to/somewhere -a
ls: cannot access '/path/to/somewhere': No such file or directory
$ echo !^
echo /path/to/somewhere
/path/to/somewhere
$ Alternatively (incurring an extra keystroke) you could use !:1 . $ ls /path/to/somewhere -a
ls: cannot access '/path/to/somewhere': No such file or directory
$ echo !:1
echo /path/to/somewhere
/path/to/somewhere
$ This is fully documented in the Event Designators and Word Designators sections of the bash man page . | {
"source": [
"https://unix.stackexchange.com/questions/706717",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/469646/"
]
} |
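A few more word designators from the same family, in case the part you want to reuse is more than the first word; this is a sketch for an interactive bash session with history expansion enabled:
ls /etc /var -l   # the "previous command" for the lines below
echo !^           # first argument   -> /etc
echo !:2          # second argument  -> /var
echo !$           # last argument    -> -l
echo !*           # all arguments    -> /etc /var -l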
706,729 | I want to remove from the PATH environment variable all of the entries that contain a certain word, how can I do that? | {
"source": [
"https://unix.stackexchange.com/questions/706729",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/530428/"
]
} |
707,670 | I'll start by stating, I'm pretty sure this is a unique mess of my own design, but I hope someone encountered this and might be able to help. The Setup My laptop runs Pop!_OS 22.04 (Based on Ubuntu Jammy). I really like the xscreensaver packages, but the Debian/Ubuntu/Pop!_OS release repos contain an outdated version, and only sid (aka Unstable) contains the updated package * . No fret, that's why pinning exists, and so this is how I have it setup: /etc/apt/preferences.d/unstable-200 file: Package: *
Pin: release a=unstable
Pin-Priority: 200 /etc/apt/preferences.d/xscreensaver-2000 file: Package: xscreensaver*
Pin: release a=unstable
Pin-Priority: 2000 /etc/apt/sources.list.d/debian.sid.list file: deb [arch=amd64] http://http.us.debian.org/debian sid main contrib non-free This actually works, at this point running sudo apt install xscreensaver installs the updated versions.
However, there is a strange side-effect. The problem When I run sudo apt update followed by sudo apt upgrade , I get the following output: Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be DOWNGRADED:
alsa-topology-conf appmenu-gtk-module-common aspell-en ca-certificates
chrome-gnome-shell dictionaries-common dns-root-data emacsen-common folks-common
fonts-arphic-ukai fonts-noto-cjk fonts-noto-cjk-extra fonts-noto-color-emoji
fonts-urw-base35 friendly-recovery gir1.2-flatpak-1.0 gir1.2-gdkpixbuf-2.0
gir1.2-graphene-1.0 gir1.2-gtksource-4 gir1.2-polkit-1.0 gir1.2-secret-1
gir1.2-soup-2.4 gsfonts gsfonts-x11 hunspell-ar hunspell-de-at-frami
hunspell-de-ch-frami hunspell-de-de-frami hunspell-en-au hunspell-en-ca hunspell-en-gb
hunspell-en-us hunspell-en-za hunspell-es hunspell-fr hunspell-fr-classical hunspell-it
hunspell-pt-br hunspell-pt-pt hunspell-ru hyphen-de hyphen-en-gb hyphen-es hyphen-fr
hyphen-it hyphen-pt-br hyphen-pt-pt ieee-data javascript-common klibc-utils
laptop-detect liba52-0.7.4 libappmenu-gtk2-parser0 libbytesize-common libffi8
libflatpak-dev libgl1 libgles2 libgutenprint-common libgweather-4-0 libio-stringy-perl
libjs-jquery libldacbt-abr2 libmpcdec6 libmysofa1 libopengl0 libpolkit-gobject-1-0
libsndio7.0 libsoup-gnome2.4-1 libtermkey1 libvterm0 libwacom-common libxkbcommon0
mythes-ar mythes-de mythes-de-ch mythes-en-au mythes-en-us mythes-es mythes-fr
mythes-it mythes-pt-pt mythes-ru neovim-runtime netbase pass policykit-1 poppler-data
powermgmt-base printer-driver-all python3-certifi python3-fido2 python3-jinja2
python3-launchpadlib python3-lazr.uri python3-macaroonbakery python3-more-itertools
python3-pkg-resources python3-pyatspi python3-rfc3339 python3-setuptools python3-tz
python3-wheel python3-ykman sensible-utils sgml-base sgml-data sound-icons ssl-cert
tpm-udev ucf update-inetd va-driver-all wamerican wbrazilian wbritish wfrench witalian
wngerman wogerman wspanish wswiss xfonts-base xml-core yubikey-manager
0 upgraded, 0 newly installed, 125 downgraded, 0 to remove and 0 not upgraded.
Need to get 257 MB/283 MB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] This also throws off Pop!_OS Shop's update count, with these packages showing as pending Operating System Updates. Troubleshooting Some data I collected while attempting to troubleshoot this. Removing /etc/apt/sources.list.d/debian.sid.list and running sudo apt update resolves the issue, so I know it's just a miscalculation/flawed logic somewhere. Focusing on the first package in the list alsa-topology-conf : Although I know the error is completely superficial, at first I thought apt somehow tracks where (which repo) the package came from, so I removed, cleaned-up, then reinstalled the package. Didn't make a difference. sudo apt remove alsa-topology-conf
sudo apt clean
sudo apt update
sudo apt install alsa-topology-conf Running apt policy alsa-topology-conf , the results are: alsa-topology-conf:
Installed: 1.2.5.1-2
Candidate: 1.2.5.1-2
Version table:
*** 1.2.5.1-2 200
200 http://http.us.debian.org/debian sid/main amd64 Packages
100 /var/lib/dpkg/status
1.2.5.1-2 501
501 http://us.archive.ubuntu.com/ubuntu jammy/main amd64 Packages
501 http://us.archive.ubuntu.com/ubuntu jammy/main i386 Packages It seems that both sid and jammy have the exact same version, and for some reason, apt matches the package to the 200 priority, instead of the 501 priority entry. With /etc/apt/sources.list.d/debian.sid.list removed, the output looks like this: alsa-topology-conf:
Installed: 1.2.5.1-2
Candidate: 1.2.5.1-2
Version table:
*** 1.2.5.1-2 501
501 http://us.archive.ubuntu.com/ubuntu jammy/main amd64 Packages
501 http://us.archive.ubuntu.com/ubuntu jammy/main i386 Packages
100 /var/lib/dpkg/status Related questions The following are related questions with similar situations but none of the answers there helped me understand or resolve this. apt pinning priority restricted Debian 10: Why some SSL packages will be downgraded? How to get rid of "Packages were downgraded and -y was used without --allow-downgrades" apt message I've tried all of the answers in the above questions, but none seems to either be relevant or work out. My question Does anyone have any suggestion on how to reconcile this so that the system will not constantly think that these packages need to be DOWNGRADED ? | The basic answer is that you’re doing something that you shouldn’t, namely mixing repositories across releases (and distribution) . Pulling in Debian packages in an Ubuntu-based distribution is a bad idea. xscreensaver is available in later versions of Ubuntu , which would be less dangerous to use, but even that’s a bad idea. Given all the investigation you’ve done, and the detail you’ve provided, it’s worth explaining the behaviour you’re seeing here. All the packages that are offered for “downgrade” have the shared property of being available in the same version in Debian and Ubuntu; however, they are not the same packages, since all packages imported from Debian are rebuilt in Ubuntu. The first feature of apt which comes into play here is that pin-priorities only choose versions . For any package available in different versions in your repositories, the pin-priorities will distinguish between them. For any package available in the same version in your repositories, they won’t. The next feature then applies: when multiple repositories provide the same version, the first one listed wins . This combines with another feature of apt , which is that a package installed with a given hash will be replaced by a repository package with the same version if the hashes don’t match (there’s a Q&A about that somewhere here, but I can’t find it right now). The result of all this is that for all packages provided by Pop!_OS (Ubuntu under the hood), whose versions in Jammy exactly match the current version in Debian unstable, apt will consider replacing them with the Debian version. I’m not sure why it identifies them as downgrades. If you were to go ahead with this, you’d replace a number of Pop!_OS packages with their Debian “equivalents”; there’s a decent chance that that would actually work, but there’s also the possibility that subtle differences in the libraries used would cause problems. You’d end up with a wholly untested setup. To undo this, you should remove sid.list , update your repositories, and explicitly re-install any package you “downgraded”: sudo apt reinstall alsa-topology-conf | {
"source": [
"https://unix.stackexchange.com/questions/707670",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/138012/"
]
} |
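A sketch of the undo steps suggested above, using the package from the question as the example; the last line only matters for packages that were actually replaced by Debian builds:
sudo rm /etc/apt/sources.list.d/debian.sid.list
sudo apt update
apt policy alsa-topology-conf           # the candidate should now come only from the Ubuntu/Pop archive
sudo apt reinstall alsa-topology-conf   # re-fetch the distribution's own build if it was swapped out
Note that this also drops the sid source that provided the newer xscreensaver, which is exactly the trade-off the answer describes.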
708,115 | I've set up my sudoers so that it asks for the root password instead of the user password every time I use sudo . Mainly because I believe it makes sense that if you want to execute a root command, you should know the root password. However, could this be considered a security risk? And if it is not, why isn't this the default configuration in most distros? Edit:
I am running a personal Linux Machine, where I am the only user. Does the rationale make more sense in this context?
I do think that this may not apply to multi-user systems. Context:
My experience with sudo was on systems where sudo was simply a "synonym" for su . One could run any root command by simply typing their user password, which I thought defeated the purpose of root to begin with. Hence my reasoning to have it ask you for the root password.
Having said that I was unaware of the power of sudoers , some users mentioned that you could specify which commands can be run with sudo (while leaving out some commands restricted to the root user only). This I think is a great middle ground | Some would consider this a security risk because it undermines two of the main purposes of using sudo rather than su , which are: sudo makes it easy to allow users to run some, but not all, commands as root, and You don't have to give out the root password . Having the root password is potentially far more dangerous than just being allowed to run certain commands as root. Once someone has the root password, they can either login as root or use it with su . It is also harder to revoke root access from just one person - you have to change the root password and let everyone know what the new password is. With sudo 's default configuration, you only have to change the sudoers file and/or remove the user from the sudo group. This is why it's not the default configuration for sudo . I strongly recommend that you revert back to the default behaviour as it is almost certainly more well thought out than your belief that it "makes sense" that you should have to know the root password. "common sense" is usually neither "common" nor "sensible". sudo was written, at least in part, to avoid the problems caused when everyone who needed some ability to do some root-level sysadmin tasks had to know the root password. In practice, this proved to be extremely problematic, especially in large environments like universities or corporations where people changed roles a lot. I've worked in several environments over the years where people had moved on to other roles in the same organisations years before (or even left the organisation completely) but still had root access on machines that they shouldn't even still have a valid login on. There's also the issue that people often get upset when you do the actually right thing and remove root access and/or disable accounts when those things are no longer needed. | {
"source": [
"https://unix.stackexchange.com/questions/708115",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/531949/"
]
} |
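For reference, both behaviours discussed here are plain sudoers settings; a minimal sketch (always edit with visudo, and treat the username and command paths as placeholders):
# /etc/sudoers fragment
# 1) What the question did: ask for root's password instead of the caller's.
Defaults rootpw
# 2) The usual middle ground: keep user passwords, but only allow specific commands.
alice ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/apt update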
708,733 | July 2022 mac os Monterey V12.1
awk --version 20200816
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin21) Why does awk -F work for most letters, but NOT for the letter t ?
I have the solution, but I would like to understand why awk fails for the letter t . # Count 'e's
% echo "tweeter" | awk -F "e" '{print NF-1}'
3
# Count 'r's
% echo "tweeter" | awk -F "r" '{print NF-1}'
1
# (Attempt to) count 't's
% echo "tweeter" | awk -F "t" '{print NF-1}'
0 <=== ????
# Use gsub()
% echo "tweeter" | awk '{print gsub(/t/, "")}'
2 | Because: Normally, any number of blanks separate fields. In order to set the
field separator to a single blank, use the -F option with a value of [ ] . If a field separator of t is specified, awk treats it as if \t had been specified and uses <TAB> as the field separator. In order
to use a literal t as the field separator, use the -F option with a
value of [t] . That's from the FreeBSD awk man page , and the utilities that come with macOS are usually some old FreeBSD versions or such. $ printf 'foo\tbar\n' | awk -F t '{print NF-1}'
1
$ echo total | awk -F '[t]' '{print NF-1}'
2 In a way, that seems like a useful shorthand for files with tab-separated values, but what with other letters taken as-is, it's confusing. It only works like that with -F , using -v FS=t doesn't do it. The feature is non-POSIX, as POSIX says that -F x is the same as -v FS=x . Most other awks I tested treated t as the literal letter (some versions of gawk, mawk and Busybox). The version of awk that e.g. Debian has in the original-awk package ("One True AWK" or "BWK awk" presumably from Brian W. Kernighan's initials) does support it, though, and at least Wikipedia seems to indicate that would be the same software FreeBSD uses. This one appears to be based on the version described in the 1988 book "The AWK Programming Language", but I'm not an expert on awk lineages and don't know if it has evolved significantly since then. That one is on github , but the documentation there doesn't seem to describe the feature. The special case can be seen in the code (where it's described as "a wart" in a comment). You can get the same behaviour with GNU awk in BWK-awk compatibility mode, though: As a special case, in compatibility mode (see section Command-Line Options), if the argument to -F is ‘t’, then FS is set to the TAB character. If you type ‘-F\t’ at the shell, without any quotes, the ‘\’ gets deleted, so awk figures that you really want your fields to be separated with TABs and not ‘t’s.
"source": [
"https://unix.stackexchange.com/questions/708733",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/532632/"
]
} |
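If you want to see the same wart on a GNU system, gawk only applies the t-means-tab substitution in its BWK compatibility mode; a small demo, assuming gawk is installed:
printf 'tweeter\n' | gawk -F t '{print NF-1}'                      # 2: t is a literal separator
printf 'tweeter\n' | gawk --traditional -F t '{print NF-1}'        # 0: t is turned into TAB
printf 'tweeter\n' | gawk --traditional -F '[t]' '{print NF-1}'    # 2: [t] keeps it literal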
708,958 | My env: zsh, macOS Command in concern: echo 'hi' | tee > a b c echo 'hi' > a b c Command 1 creates files named a , b and c with content hi . Command 2 creates a file named a with content hi b c . AFAIK, only the usage of Command 1 without > is documented in the manpage of tee : echo 'hi' | tee a b c I want some help to understand why adding > to the above code (i.e., Command 1) still creates multiple files, whereas Command 2 creates only one file. | Redirection ( > in this case) “consumes” the following argument as the target of the redirection; everything else is left alone. So echo 'hi' | tee > a b c is equivalent to echo 'hi' | tee b c > a tee duplicates its input to b , c , and standard output which goes to a . echo 'hi' > a b c is equivalent to echo 'hi' b c > a and outputs hi b c to standard output, which goes to a . | {
"source": [
"https://unix.stackexchange.com/questions/708958",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/532499/"
]
} |
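The two equivalences above are easy to verify by inspecting what lands in each file; a quick sketch in an empty directory:
cd "$(mktemp -d)"
echo hi | tee > a b c   # really: tee b c > a
grep . a b c            # a, b and c each contain "hi"
rm a b c
echo hi > a b c         # really: echo hi b c > a
cat a                   # prints "hi b c"; no b or c were created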
709,407 | I love (the way) how Linux & Co. lets users install many packages from different repositories.
AFAIK, they come also with source-packages, so you can compile them by yourself. But why even bother to "keep/offer" pre-compiled packages, when you could just compile them yourself? What are the intentions of keeping/offering them? Is it possible to configure Linux, to only download source packages and let the OS do the rest? (Just like a pre-compiled package installation?) Thank you for your answers. | It’s a trade-off: distributions which provide pre-built packages spend the time building them once (in all the configurations they support), and their users can then install them without spending the time to build them. The users accept the distributions’ binaries as-is. If you consider the number of package installations for some of the larger distributions, the time saved by not requiring recompilation everywhere is considerable. There are some distributions which ship source and the infrastructure required to build it, and rely on users to build everything locally; see for example Gentoo . This allows users to control exactly how their packages are built. If you go down this path, even with the time savings you can get by simplifying package builds, you should be ready to spend a lot of time building packages. I don’t maintain the most complex packages in Debian, but one of my packages takes over two hours to build on 64-bit x86 builders , and over twelve hours on slower architectures ! | {
"source": [
"https://unix.stackexchange.com/questions/709407",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/518817/"
]
} |
711,595 | Is it possible to check for a file existence in a crontab oneliner, and only execute a script if that file existed? Pseudocode: * * * * * <if /tmp/signal.txt exists> run /opt/myscript.sh | Use an ordinary test for existence, then run the script if the test succeeds. * * * * * if [ -e /tmp/signal.txt ]; then /opt/myscript.sh; fi or * * * * * if test -e /tmp/signal.txt; then /opt/myscript.sh; fi Or, using the short-circuit syntax. Doing it this way would cause the job to fail if the file does not exist (which may trigger an email from the cron daemon): * * * * * [ -e /tmp/signal.txt ] && /opt/myscript.sh or * * * * * test -e /tmp/signal.txt && /opt/myscript.sh You could use the -f test instead of the -e test if you want to additionally ensure that /tmp/signal.txt is a regular file and not a directory, named pipe, or some other type of file. | {
"source": [
"https://unix.stackexchange.com/questions/711595",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/249878/"
]
} |
711,610 | I am trying to write a script that will install and enable cron, add tasks with cron,
and then add auto-update and auto-upgrade tasks to the machine. What I have until now is sudo apt install cron
sudo systemctl enable cron Up to here all is good.
Then I add (after some research) the following commands: <(crontab -l) <(echo '50 19 * * * sudo apt update -y') | crontab -
<(crontab -l) <(echo '00 20 * * * sudo apt upgrade -y') | crontab - and when I check with crontab -l I see that the script did write the tasks like it should,
but they are not running (I tried running an apt install every minute to see if it was working). But when I write the command 50 19 * * 3 root sudo apt update -y with nano into the file /etc/crontab , it worked. I tried to add root permissions with crontab -e but it is still not working. Any solution? Is there a way to add a line to /etc/crontab from a script? (I couldn't find a way online.) Thank you all | {
"source": [
"https://unix.stackexchange.com/questions/711610",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/535113/"
]
} |
712,996 | Why are some relative file paths displayed in the form of ./file , instead of just file ? For example, when I do: find . I get this output: ./file1
./file2
./file3 What is the practical purpose, other than making the path more confusing? It's not like it is preventing me from some accident. Both are relative paths, and cat ./file1 works same as cat file1 . Is this behavior coming from find command, or is it some system-wide c library? OK, I understand why using ./file for -exec construct is necessary (to make sure I have ... | xargs rm ./-i , and not ... | xargs rm -i ). But in what situation would missing ./ break anything when using -print statement? I am trying to construct any statement that breaks something: touch -- -b -d -f -i
find -printf '%P\n' | sort
-b
-d
-f
-i Everything works fine. Just out of curiosity, how could I construct a -print statement that would demonstrate this issue? | This behaviour comes from find , and is specified by POSIX : Each path operand shall be evaluated unaltered as it was provided, including all trailing <slash> characters; all pathnames for other files encountered in the hierarchy shall consist of the concatenation of the current path operand, a <slash> if the current path operand did not end in one, and the filename relative to the path operand. The default action, -print , outputs the full pathname to standard out. find outputs the paths of files it finds starting from the path(s) given on its command line. find . asks find to look for files under . and its subdirectories, and it presents the results starting with ./ ; find foo would do the same but starting with foo , and it would produce results starting with foo/ . I don’t think find does this specifically to prevent problems with un-prefixed file names; rather, it does this for consistency — regardless of the path provided as argument, the output of -print always starts with that path. With the GNU implementation of find , you can strip the initial path off the start of the printed file by using -printf '%P\n' in place of -print . For instance with find foo/bar -name file -printf '%P\n' or find . -name file -printf '%P\n' , you'd get dir/file instead of foo/bar/dir/file or ./dir/file for those files. More generally, having ./ as a prefix can help prevent errors, e.g. if you have files with names starting with dashes; for example if you have a file named -f , rm -f won’t delete it, but rm ./-f will. When running commands with a shell or with exec*p() standard C functions (and their equivalent in other languages), when the command name doesn't contain a / , the path of command is looked in $PATH instead of being interpreted as a relative path (the file in the current working directory). Same applies for the argument to the . / source special builtins of several shells (including POSIX compliant sh implementations). Using ./cmd in that case instead of cmd , which is another way to specify the same relative path, but with a / in it is how you typically invoke a command stored in the current working directory. | {
"source": [
"https://unix.stackexchange.com/questions/712996",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/155832/"
]
} |
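A short illustration of the behaviour described in the answer above (a sketch; dir is a hypothetical directory and -printf is a GNU find extension):
find dir -name '*.log' -print          # prints paths such as dir/sub/a.log
find dir -name '*.log' -printf '%P\n'  # prints sub/a.log, without the starting point
And the reason a leading ./ can matter for odd file names: with a file literally called -f,
touch ./-f
rm -f     # parsed as an option; the file survives
rm ./-f   # removes it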
713,038 | I would like to replace straight single and double quotes with curly quotes ( ‘ ’ , “ ” ).
How can I do this with a shell command? | {
"source": [
"https://unix.stackexchange.com/questions/713038",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/513913/"
]
} |
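A hedged sketch of one way to do this with GNU sed in a UTF-8 locale (file.txt is a hypothetical input; this naive pairing will mangle apostrophes and unpaired quotes, so check the result):
sed -i 's/"\([^"]*\)"/“\1”/g' file.txt
sed -i "s/'\([^']*\)'/‘\1’/g" file.txt
For real prose, a tool with a smart-punctuation mode (pandoc, for example) may handle edge cases better.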
713,108 | I used shred to wipe my external hard disk: sudo shred -vz /dev/sdb I should also add that the disk had 5 bad sectors. I want to verify the disk has been zeroed, per https://superuser.com/questions/1510233/is-there-a-faster-way-to-verify-that-a-drive-has-been-fully-zeroed I'm not that familiar with dd , but I believe that these show it's been zeroed: sudo dd if=/dev/sdb status=progress | hexdump
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
5000916670976 bytes (5.0 TB, 4.5 TiB) copied, 45754 s, 109 MB/s
9767541167+0 records in
9767541167+0 records out
5000981077504 bytes (5.0 TB, 4.5 TiB) copied, 45756.7 s, 109 MB/s
48c61b35e00 sudo dd if=/dev/sdb status=progress | od | head
5000952267264 bytes (5.0 TB, 4.5 TiB) copied, 45739 s, 109 MB/s
9767541167+0 records in
9767541167+0 records out
5000981077504 bytes (5.0 TB, 4.5 TiB) copied, 45741.1 s, 109 MB/s
0000000 000000 000000 000000 000000 000000 000000 000000 000000
*
110614154657000 But using a simple cmp shows an exception: sudo cmp /dev/zero /dev/sdb
cmp: EOF on /dev/sdb after byte 5000981077504, in line 1 Has the disk been zeroed? | Has the disk been zeroed? Yes. The output of your dd command shows that it has written 5000981077504 bytes. Your cmp command says that it's reached EOF (end of file) after 5000981077504 bytes, which is the same. Be aware that this only works well with hard drives. For solid-state devices, features such as wear leveling and overprovisioning space may result in some data not being erased. Furthermore, your drive must not have any damaged sectors, as they will not be erased. Note that cmp will not be very efficient for this task. You would be better off with badblocks : badblocks -svt 0x00 /dev/sdb From badblocks(8) , the -t option can be used to verify a pattern on the disk. If you do not specify -w (write) or -n (non-destructive write), then it will assume the pattern is already present: -t test_pattern
Specify a test pattern to be read (and written) to disk blocks.
The test_pattern may either be a numeric value between 0 and
ULONG_MAX-1 inclusive, or the word "random", which specifies
that the block should be filled with a random bit pattern. For
read/write (-w) and non-destructive (-n) modes, one or more test
patterns may be specified by specifying the -t option for each
test pattern desired. For read-only mode only a single pattern
may be specified and it may not be "random". Read-only testing
with a pattern assumes that the specified pattern has previously
been written to the disk - if not, large numbers of blocks will
fail verification. If multiple patterns are specified then all
blocks will be tested with one pattern before proceeding to the
next pattern. Also, using dd with the default block size (512) is not very efficient either. You can drastically speed it up by specifying bs=256k . This causes it to transfer data in chunks of 262,144 bytes rather than 512, which reduces the number of context switches that need to occur. Depending on the system, you can speed it up even more by using iflag=direct , which bypasses the page cache. This can improve read performance on block devices in some situations. Although you didn't ask, it should be pointed out that shred overwrites a target using three passes by default. This is unnecessary. The myth that multiple overwrites is necessary on hard disks comes from an old recommendation by Peter Gutmann. On ancient MFM and RLL hard drives, specific overwrite patterns were require to avoid theoretical data remanence issues. In order to ensure that all types of disks could be overwritten, he recommended using 35 patterns so that at least one of them would be right for your disk. On modern hard drives using modern data encoding techniques such as EPRML and NPML, there is no need to use multiple patterns. According to Gutmann himself: In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. In your position, I would recommend something along this line instead: dd if=/dev/urandom of=/dev/sdb bs=256k oflag=direct conv=fsync When it finishes, just make sure it has written enough bytes after it says "no space left on device". You can also use ATA Secure Erase which initiates firmware-level data erasure. I would not use it on its own because you would be relying on the firmware authors to have implemented the standard securely. Instead, use it in addition to the above in order to make sure dd didn't miss anything (such as bad sectors and the HPA). ATA Secure Erase can be managed by the command hdparm : hdparm --user-master u --security-set-pass yadayada /dev/sdb
hdparm --user-master u --security-erase yadayada /dev/sdb Note that this doesn't work on all devices. Your external drive may not support it. | {
"source": [
"https://unix.stackexchange.com/questions/713108",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/265166/"
]
} |
714,125 | sort -o seems superfluous. What is the point of using it when we can use sort > ? Is it sometimes impossible to use shell redirection? | Sort a file in-place: sort -o file file Using sort file >file would start by truncating the file called file to zero size, then calling sort with that empty file, resulting in an empty output file no matter what the original file's contents was. Also, in situations where commands or lists of options are automatically generated by e.g. scripts, adding -o somefile to the end of the options would override any previously set output file, which allows controlling the output file location by way of appending options. sort_opt=( some list of options )
if [ ... something ... ]; then
# We don't need to go through and delete any old use of "-o"
# because this later option would override it.
sort_opt+=( -o somefile.out )
fi
sort "${sort_opt[@]}" "$thefile" There might also be instances where the sort binary executable is called directly, without a shell to do any redirection to any file. Note that -o is a standard option whereas --output is a GNU extension. | {
"source": [
"https://unix.stackexchange.com/questions/714125",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/85900/"
]
} |
715,899 | Say I have a C program main.c that statically links to libmine.a . Statically linking to a library causes library functions to be embedded into the main executable at compile time. If libmine.a were to feature functions that weren't used by main.c , would the compiler (e.g. GCC) discard these functions? This question is inspired by the "common messaging" that using static libraries make executables larger, so I'm curious if the compiler at least strips away unused code from an archive file. | By default, linkers handle object files as a whole. In your example, the executable will end up containing the code from main.c ( main.o ), and any object files from libmine.a (which is an archive of object files) required to provide all the functions used by main.c (transitively). So the linker won’t necessarily include all of libmine.a , but the granularity it can use isn’t functions (by default), it’s object files (strictly speaking, sections). The reason for this is that when a given .c file is compiled to an object file, information from the source code is lost; in particular, the end of a function isn’t stored, only its start, and since multiple functions can be combined, it’s very difficult to determine from an object file what can actually be removed if a function is unused. It is however possible for compilers and linkers to do better than this if they have access to the extra information needed. For example, the LightspeedC programming environment on ’80s Macs could use projects as libraries, and since it had the full source code in such cases, it would only include functions that were actually needed. On more modern systems, the compiler can be told to produce object files which allow the linker to handle functions separately. With GCC, build your .o files with the -ffunction-sections -fdata-sections options enabled, and link the final program with the --gc-sections option. This does have an impact, notably by preventing certain categories of optimisation; see discard unused functions in GCC for details. Another option you can use with modern compilers and linkers is link-time optimisation; enable this with -flto . When optimisation is enabled ( e.g. -O2 when compiling the object files), the linker will not include unused functions in the resulting binary. This works even without -ffunction-sections -fdata-sections . | {
"source": [
"https://unix.stackexchange.com/questions/715899",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/287718/"
]
} |
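To make the answer above concrete, a sketch of the two approaches with GCC (main.c and libmine.a follow the question; libfuncs.c is a hypothetical source file for the library):
gcc -c -ffunction-sections -fdata-sections libfuncs.c
ar rcs libmine.a libfuncs.o
gcc -c main.c
gcc -Wl,--gc-sections main.o -L. -lmine -o prog   # unreferenced sections are dropped at link time
gcc -O2 -flto -c main.c libfuncs.c                # or rely on link-time optimisation instead
gcc -O2 -flto main.o libfuncs.o -o prog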
718,042 | I'm a little confused about the missing -e option from the bash manual. man bash But it is working with a script shebang like : #!/bin/bash -e and of course it is defined in help set . Why isn't it listed in the options in the bash manual ? | It is implicitly mentioned at the start of the manual: OPTIONS All of the single-character shell options documented in the description
of the set builtin command, including -o , can be used as options when
the shell is invoked. [...] You are then expected to look up the set builtin command further down in the manual, use help set in an interactive shell session (as you mention in the question), or access the longer reference manual in some appropriate way (e.g. by using the info bash set command, on systems where this works). | {
"source": [
"https://unix.stackexchange.com/questions/718042",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/119603/"
]
} |
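A few equivalent ways to reach the same behaviour and documentation, as plain illustrations of what the quoted manual text says:
bash -e script.sh          # same effect as -e on the shebang line
bash -o errexit script.sh  # long form accepted at invocation
set -e                     # inside the script itself
help set                   # where the option is actually documented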
718,052 | This is the current state of my disk at present; how can I use the unused space, or move things around to free up space, without formatting or losing data? Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.6G 0 7.6G 0% /dev
tmpfs 7.7G 121M 7.5G 2% /dev/shm
tmpfs 7.7G 2.0M 7.7G 1% /run
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
/dev/mapper/Root 50G 50G 336M 100% /
/dev/nvme0n1p2 3.0G 467M 2.6G 16% /boot
/dev/nvme0n1p1 200M 17M 184M 9% /boot/efi
/dev/mapper/Home 100G 30G 71G 30% /home
tmpfs 1.6G 52K 1.6G 1% /run/user/119637 Output of lvmdiskscan /dev/mapper/luks-e66c5c74-2af5-4500-9e5f-011c23ab17aa [ 235.26 GiB] LVM physical volume
/dev/nvme0n1p1 [ 200.00 MiB]
/dev/nvme0n1p2 [ 3.00 GiB]
/dev/nvme0n1p3 [ <235.28 GiB]
0 disks
3 partitions
1 LVM physical volume whole disk
0 LVM physical volumes Can I merge the home partition, or give root some space from home, since home has more free space?
Are the steps below logical? Carve another partition out of home, then merge that into root (I have no idea which commands to use). If I give 10G from home to root it will resolve the storage issue for my machine, and all data will stay intact. As a workaround for now, I just moved the heaviest files: # find . -type f -size +1G
./VirtualBox VMs/origin-1.3.0/box-disk1.vmdk
./VirtualBox VMs/virtualBox-related_default_1654693896122_36201/centos-7-1-1.x86_64.vmdk
./.vagrant.d/boxes/thesteve0-VAGRANTSLASH-openshift-origin/1.2.0/virtualbox/box-disk1.vmdk
# mv "./VirtualBox VMs/origin-1.3.0/box-disk1.vmdk" /home/
# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 7.6G 0 7.6G 0% /dev
tmpfs 7.7G 155M 7.5G 2% /dev/shm
tmpfs 7.7G 2.0M 7.7G 1% /run
tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup
/dev/mapper/Root 50G 40G 11G 79% /
/dev/nvme0n1p2 3.0G 467M 2.6G 16% /boot
/dev/nvme0n1p1 200M 17M 184M 9% /boot/efi
/dev/mapper/Home 100G 40G 61G 40% /home
tmpfs 1.6G 48K 1.6G 1% /run/user/119637 Not sure how this will impact using virtualbox :) | {
"source": [
"https://unix.stackexchange.com/questions/718052",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/388380/"
]
} |
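The usual LVM approach to the question above, strictly as a hedged sketch and not a recommendation — it assumes the Home and Root logical volumes share a volume group, that you have a verified backup, and that /home can be unmounted (ext4 cannot be shrunk while mounted, so the first step is typically done from a live/rescue system):
lvreduce --resizefs -L -10G /dev/mapper/Home   # shrink the filesystem and the LV together
lvextend --resizefs -L +10G /dev/mapper/Root   # then grow root and its filesystem (this part can be done online)
Adjust the device paths to your real VG/LV names (for example vgname/Home) before running anything.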
719,802 | I can't create folders or files named 'com1', 'com2', ..., 'com9' on my extended hard drive. I'm trying to create a Wine prefix on my other drive where my games are stored, but I get some errors. Here is a pastebin of the whole output when I run winecfg for a new prefix: https://pastebin.com/SsaAFGdw I believe it's not a permission issue since I can make directories and files. I also tried creating a prefix on my main boot drive and then moving it to my extended hard drive, but then I get errors when it tries to copy files named 'com1', 'com2', ..., 'com9' . This is how my extended drive is partitioned: sudo WINEPREFIX='path' winecfg also does not work, same result. EDIT:
OS: Manjaro KDE Plasma Output from mount | grep /dev/sdb : /dev/sdb2 on /run/media/snich/Extended type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
/dev/sdb4 on /run/media/snich/Games type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
/dev/sdb3 on /run/media/snich/Personal type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2) | Assuming ntfs-3g is used, windows_names is probably set somewhere as an option. See the man page: OPTIONS windows_names This option prevents files, directories and extended attributes to be created with a name not allowed by windows, because it contains some not allowed character, or the last character is a space or a dot, or the name is reserved. The forbidden characters
are the nine characters " * / : < > ? \ | and those whose code is less
than 0x20, and the reserved names are CON, PRN, AUX, NUL, COM1..COM9,
LPT1..LPT9, with no suffix or followed by a dot. Existing such files can still be read (and renamed). Edited response : I'm currently with debian/Buster and there is a /etc/udisks2/udisks2.conf file containing : ### For the reference, these are the builtin mount options:
# [defaults]
[...]
# ntfs_defaults=uid=$UID,gid=$GID,windows_names
# ntfs_allow=uid=$UID,gid=$GID,umask,dmask,fmask,locale,norecover,ignore_case,windows_names,compression,nocompression,big_writes So, for Debian, and probably most of its derivatives, mounting an NTFS volume implies using the option windows_names . As explained in the same file (a little bit higher), you could try putting your options in a /etc/udisks2/mount_options.conf file. Just edit/create the file, copy those two lines, remove the leading hash and remove the option windows_names . Do everything as root, and take care of permissions. Unmount and re-mount. (Now, I'm not sure all this is good advice: as Wine will act "à la" MS-Windows, this may not end up being a good thing.) This is just a feeling, not fact, and many others have proved it doesn't hurt. Enjoy ! | {
"source": [
"https://unix.stackexchange.com/questions/719802",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/543933/"
]
} |
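A sketch of what the override file described in the answer might contain (the path and section name follow the commented defaults quoted above; treat the exact syntax as an assumption to verify against your udisks2 version):
# /etc/udisks2/mount_options.conf
[defaults]
ntfs_defaults=uid=$UID,gid=$GID
After creating it as root, unmount and remount the NTFS volumes for the change to take effect.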
723,368 | I have a command <streaming ls> | wc -l , it works fine, but the <streaming ls> takes a while, which means I don't get the final line count until a few minutes later. Is there a way to have the output of wc -l update in real time? | You can’t use wc -l for this, but you can produce a running count of lines seen using other tools, for example AWK: <streaming ls> | awk '{ printf "%d\r", NR } END { print NR }' This will update the count of lines seen every time a line is seen, and finish with the total number of lines at the end of the process. For commands producing lots of output, the overhead can be reduced by printing every n lines: … | awk 'NR % 10 == 0 { printf "%d\r", NR } END { print NR }' (for n = 10) or by printing every second: … | awk 'systime() > lasttime { lasttime = systime(); printf "%d\r", NR } END { print NR }' (or every n seconds by changing the condition to >= lasttime + n ). | {
"source": [
"https://unix.stackexchange.com/questions/723368",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/321024/"
]
} |
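If pv is installed, a shorter alternative to the AWK approach above — the running line count is shown on stderr while the data passes through; <streaming ls> stands for the question's long-running command:
<streaming ls> | pv -l > /dev/null        # running line count only
<streaming ls> | pv -l | wc -l            # running count plus the final total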
723,387 | Can anyone tell me the most efficient way to convert a variable to a repeating -param in a script call? I don't know how to describe it properly, but the examples speak for themselves. At least I hope they do :) Example1: # input
export DOMAINS="domain1.tld,domain2.tld"
# transform to
./example-script.sh -d "domain1.tld" -d "domain2.tld" Example2 (input is singular): # input
export DOMAINS="domain1.tld"
# transform to
./example-script.sh -d "domain1.tld" [UPDATE]
I'm sorry for not adding this in the first post. Context I should have added: DOMAINS is env variable added to a Docker container Container only has sh shell, so zsh and bash specific options won't work. Sorry for adding the initial bash tag. | {
"source": [
"https://unix.stackexchange.com/questions/723387",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/547598/"
]
} |
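A plain-sh sketch for the question above (no arrays needed, so it works in the container's sh; it assumes the domain names contain no spaces, and set -f guards against accidental globbing while $DOMAINS is expanded unquoted):
set -f
old_ifs=$IFS; IFS=','
set -- $DOMAINS               # split on commas: $1, $2, ... are the individual domains
IFS=$old_ifs; set +f
i=0; n=$#
while [ "$i" -lt "$n" ]; do   # rewrite each positional parameter as "-d domain"
    set -- "$@" -d "$1"
    shift
    i=$((i + 1))
done
./example-script.sh "$@"      # expands to -d domain1.tld -d domain2.tld ... (or a single -d)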
725,590 | I’m asking because string comparisons are slow, but indexing is fast, and a lot of scripts I write are in bash, which to my knowledge performs a full string lookup for every executable call. All those ls ’s and grep ’s would be a little bit faster without performing a string lookup on each step. Of course, this now delves into compiler optimization. Anyways, is there a way to directly invoke a program in Linux using only its inode number (assuming you only had to look it up once for all invocations)? | The short answer is no. The longer answer is that linux user API doesn't support accessing files by any method using the inode number. The only access to the inode number is typically through the stat() system call which exposes the inode number, which can be useful for identifying if two filenames are the same file, but is not used for anything else. Accessing a file by inode would be a security violation, as it would bypass permissions on the directories that contain the file linked to the inode. The closest you can get to this would be accessing a file by open file handle. But you can't run a program from that either, and this would still require opening the file by a path. (As noted in comments, this functionality was added to linux for security reasons along with the rest of the *at system calls, but is not portable.) There's also numerous ways of using the inode number to find the file (basically, crawl the filesystem and use stat) and then run it normally, but this is the opposite of what you want, as it is enormously more expensive than just accessing the file by pathname and doesn't remove that cost either. Having said that, worrying about this type of optimization is probably moot, as Linux has already optimized the internal inode lookup a great deal. Also, traditionally, shells hash the path location of executables so they don't have to hunt for them from all directories in $PATH every time. | {
"source": [
"https://unix.stackexchange.com/questions/725590",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/441393/"
]
} |
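Two concrete illustrations of the points in the answer above (GNU stat and find assumed; /bin/ls is just a convenient example file):
stat -c '%i' /bin/ls                                   # read a file's inode number
find /bin -xdev -inum "$(stat -c '%i' /bin/ls)"        # map an inode back to its path(s) by crawling
The second command is the expensive direction: there is no syscall that opens or executes a file given only an inode number, so you always end up back at a pathname.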
726,261 | Suppose we have an if statement as follows: if [ $(false) ]; then
echo "?"
fi Then "?" is not printed (the condition is false). However, in the following case, "?!" is printed, why? if [ $(false) -o $(false) ]; then
echo "?!"
fi | $(false) doesn’t evaluate to false, it produces an empty string. Because it isn’t quoted, if [ $(false) ]; then evaluates to if [ ]; then which is false, because [ with an empty expression is false. if [ $(false) -o $(false) ]; then evaluates to if [ -o ]; then This doesn’t use the -o operator, it evaluates -o as an expression with a single string; [ with such an expression is true, so the then part of the if statement runs. See the POSIX specification for test , in particular: The algorithm for determining the precedence of the operators and the return value that shall be generated is based on the number of arguments presented to test. (However, when using the "[...]" form, the <right-square-bracket> final argument shall not be counted in this algorithm.) In the following list, $1, $2, $3, and $4 represent the arguments presented to test: 0 arguments: Exit false (1). 1 argument: Exit true (0) if $1 is not null; otherwise, exit false. test only considers operators if it is given at least two arguments. If you want to use a command’s exit status as a condition, don’t put it either in a command substitution or in [ ] : if false; then and if false || false; then Note too that test ’s -a and -o operators are deprecated and unreliable ; you should use the shell’s && and || operators instead, e.g. if [ "$a" = b ] || [ "$a" = c ]; then | {
"source": [
"https://unix.stackexchange.com/questions/726261",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/5228/"
]
} |
729,344 | I'm able to run this command successfully: tail -f my_file.txt | grep foo It shows only the lines with the string foo , and it keeps showing them. But when I run this command: tail -f my_file.txt | grep foo | grep bar It doesn't show any lines, even though there are lines that include both foo and bar . I know there is a solution for using multiple patterns in a single grep call, but I want to know why this line failed. | That's because the default behaviour of the C runtime library is to buffer writes to stdout until a full block of data is written (some kilobytes, usually), unless stdout is connected to a terminal. You'll get output once the middle grep has printed a full block, but then you have to wait again for the next block to fill, and so on. It's an optimization for throughput, and works much better when the left-hand command just does some task and terminates, instead of waiting for something. GNU grep has the --line-buffered option to turn off that buffering, so this should work better: tail -f my_file.txt | grep --line-buffered foo | grep bar The last grep prints to the terminal so it's line buffered by default and doesn't need an option. See Turn off buffering in pipe for generic solutions to the buffering issue. In this particular case of two greps, you could use e.g. a single AWK instead as Stéphane Chazelas mentioned in a comment: tail -f my_file.txt | awk '/foo/ && /bar/' (Incidentally, you could also do things like awk '/foo/ && !/bar/' , catching lines with foo but no bar .) Doing the same in grep would be harder, as grep -e foo -e bar matches any lines that contain either foo or bar . You'd need something like ... | grep -E -e 'foo.*bar|bar.*foo' instead. | {
"source": [
"https://unix.stackexchange.com/questions/729344",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/11086/"
]
} |
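Another commonly used workaround, for filters that have no --line-buffered option of their own: stdbuf from GNU coreutils changes the default stdio buffering of the command it launches (a sketch; it only affects programs that use stdio and don't set their own buffering):
tail -f my_file.txt | stdbuf -oL grep foo | grep bar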
730,919 | There is already the Nimbus ExaDrive 100TB SSD and the 200TB SSD will come soon . As you can read here ext4 supports up to 256 TB. It's only a matter of time before hardware reaches this limit. Will they update ext4 or will there be an ext5? What will happen? | 64-bit ext4 file systems can be up to 64ZiB in size with 4KiB blocks, and up to 1YiB in size with 64KiB blocks , no need for an ext5 to handle large volumes. 1 YiB, one yobibyte, is 1024^8 bytes. There are practical limits around 1 PiB and 1 EiB , but that's still (slightly) larger than current SSDs, and the limits should be addressable within ext4, without requiring an ext5. (The arithmetic behind these figures is spelled out just after this entry.) | {
"source": [
"https://unix.stackexchange.com/questions/730919",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/376049/"
]
} |
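The arithmetic behind the figures in the answer above, for reference: the 64ZiB and 1YiB limits correspond to 2^64 blocks of 4 KiB and 64 KiB respectively — 2^64 × 2^12 = 2^76 bytes = 64 ZiB, and 2^64 × 2^16 = 2^80 bytes = 1 YiB (and 1 YiB = 1024^8 bytes). The lower practical limits mentioned come from other implementation constraints, not from the block arithmetic.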
uptime piped to sed fixes the first part but what about memory usage? top runs, at least by default, interactively. So: I need to watch used RAM excluding opportunistic caching(which gets dropped as soon as the memory is needed). And ask about both because I expect a single standard tool can do it, instead of two. Or - even better - something in /proc indicating the RAM part. | {
"source": [
"https://unix.stackexchange.com/questions/730947",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/20506/"
]
} |
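For the question above, a sketch using standard tools — MemAvailable has been in /proc/meminfo since Linux 3.14 and already excludes reclaimable caches:
awk '/^MemAvailable:/ {print $2 " kB available"}' /proc/meminfo
free -m              # the "available" column is the same figure in MiB
watch -n1 free -m    # refresh every second alongside uptime, if desired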
730,957 | I try to use a public key to connect to a remote server running centos7. I generated a key by ssh-keygen then copy the key to the server by ssh-copy-id [email protected] the authorized_keys is created on the remote machine, but the ssh login still requires the password. I try to login with triple verbose option ssh -v [email protected] and it give me something like: OpenSSH_7.6p1 Ubuntu-4ubuntu0.7, OpenSSL 1.0.2n 7 Dec 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to chip02.phy.ncu.edu.tw [140.115.32.12] port 22.
debug1: Connection established.
debug1: identity file /home/longhoa/.ssh/id_rsa type 0
debug1: key_load_public: No such file or directory
debug1: identity file /home/longhoa/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/longhoa/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/longhoa/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/longhoa/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/longhoa/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/longhoa/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /home/longhoa/.ssh/id_ed25519-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.7
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4
debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
debug1: Authenticating to chip02.phy.ncu.edu.tw:22 as 'hoa'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:ALKc8EF9HMXaCSs/aN4wsfpFN8Bh1W9twUxOTueP5Kk
debug1: Host 'chip02.phy.ncu.edu.tw' is known and matches the ECDSA host key.
debug1: Found key in /home/longhoa/.ssh/known_hosts:1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Next authentication method: gssapi-keyex
debug1: No valid Key exchange context
debug1: Next authentication method: gssapi-with-mic
debug1: Unspecified GSS failure. Minor code may provide more information
No Kerberos credentials available (default cache: FILE:/tmp/krb5cc_1000)
debug1: Unspecified GSS failure. Minor code may provide more information
No Kerberos credentials available (default cache: FILE:/tmp/krb5cc_1000)
debug1: Next authentication method: publickey
debug1: Offering public key: RSA SHA256:S79m96anBkvF16Rjihe80MYbcU1fZlfPxE5686k/vn4 /home/longhoa/.ssh/id_rsa
debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
debug1: Trying private key: /home/longhoa/.ssh/id_dsa
debug1: Trying private key: /home/longhoa/.ssh/id_ecdsa
debug1: Trying private key: /home/longhoa/.ssh/id_ed25519
debug1: Next authentication method: password
[email protected] password:
debug1: Authentication succeeded (password) I searched on google, some mentioned setting the correct permission, I followed the instruction and ended up with
the key on my computer: -rw------- 1 longhoa longhoa 1.7K 23-01-08|14:14:40 id_rsa
-rw-r--r-- 1 longhoa longhoa 399 23-01-08|14:14:40 id_rsa.pub the permission on the remote server: drwx------. 2 hoa zh 4.0K 23-01-08|15:10 /home/hoa/.ssh
-rw-------. 1 hoa zh 399 23-01-08|14:23 /home/hoa/.ssh/authorized_keys
dr-xr-xr-x. 29 root root 4096 22-12-27|17:26 /
drwxrwxrwx. 41 root root 4096 22-11-24|18:38 /home
drwx------. 58 hoa zh 12288 23-01-11|00:47 /home/hoa/ There are other answers that mention SELinux and debugging from the server side, but I don't have root access to that server, so I cannot do anything there. So how do I make it work? Thank you very much. Update 1 I tried @roaima's suggestion. ssh -nvv -o NumberOfPasswordPrompts=0 [email protected] 2>&1 | grep "debug2: host key" which returns:
debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 I also tried id_dsa and id_ed25519, but none seems to work. Update2 @roaima and @telcoM pointed out that the remote host was not set up correctly. I will update the status after I talk to the admin. | {
"source": [
"https://unix.stackexchange.com/questions/730957",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/555836/"
]
} |
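The exact cause can't be determined from the output above, but these client-side checks (no root needed) are the usual next step — a sketch only, using the hostname and user shown in the debug output:
ssh-keygen -lf ~/.ssh/id_rsa.pub                                        # fingerprint of the key being offered
ssh [email protected] 'ssh-keygen -lf ~/.ssh/authorized_keys'    # compare with what the server actually holds
ssh -o IdentitiesOnly=yes -i ~/.ssh/id_rsa -vvv [email protected]  # force this key and watch the server's reply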
731,072 | I am facing a very confusing behaviour of ls that I cannot search for. It displays that there are contents in a directory, but only when I'm in the directory where these were created from. Let me show you: ciprian Documents $ pwd
/Users/ciprian/Documents
ciprian Documents $ ls ../Downloads/rss22/
22rss-USB/
ciprian Documents $ ls ../Downloads/rss22/22rss-USB/
HTML/
ciprian Documents $ cd ../Downloads/rss22
ciprian rss22 $ ls
ciprian rss22 $ ls 22rss-USB/
gls: cannot access '22rss-USB/': No such file or directory After I cd ed to ../Downloads/rss22 , its contents are displayed as empty. It is also shown empty if I cd ~/Desktop and then I ls ../Downloads/rss22/ , like the first case here. To me, this indicates that there might be a folder named ../Downloads/rss22 inside Documents . But I cannot figure out how to display it. ls -a ~/Documents does not show anything related to these folders. What is going on? The files were created by trying a partial extraction from an archive: unzip 22rss-USB.zip "22rss-USB/HTML/**/*" -d ../Downloads/rss22/ For reference, I am on macOS , though I do not think this is relevant (it's a Unix, right?). I am using Bash 5.1.16 (changed from default zsh). Output of type pwd : pwd is a shell builtin. It turns out that if I do cd -P ../Downloads/igarss22/ then it shows the contents that I expect. Where can I see more about this? man cd doesn't show anything about -P . Now, after cd -P ../Downloads/igarss22/ ciprian Documents $ cd -P ../Downloads/igarss22
ciprian igarss22 $ pwd
/Users/ciprian/Library/CloudStorage/OneDrive/Downloads/igarss22 Right. So I forgot this ; my ~/Documents is a symlink to a folder under my OneDrive: $ ll ~/ | grep Doc
lrwxr-xr-x 1 ciprian 38 May 19 2022 Documents -> /Users/ciprian/OneDrive/Documents Which, due to some magic and changes in macOS v12 (Monterey), actually lives under /Users/ciprian/Library/CloudStorage/OneDrive . I'm still not sure what exactly is going on | The difference is physical vs. logical treatment of .. : You apparently have two separate Downloads directories: one is /Users/ciprian/Downloads the other is /Users/ciprian/Library/CloudStorage/OneDrive/Downloads Some shells, including bash and all POSIX shells, have the option of treating .. in the cd command as "take the current path exactly as the user expressed it, cut off the right-most element and change to the resulting path". Across symbolic links this will work as a logical "go back to where we came from", returning from /Users/ciprian/Library/CloudStorage/OneDrive/Documents (symlinked to, and referred by user as ~/Documents ) to /Users/ciprian (so the new path at this point is just ~ ) and then from there to /Users/ciprian/Downloads . If you use cd -L , this is explicitly the behavior you'll get. Other programs typically won't do this: they will instead follow the physical paths, so for them, changing directories to ../Downloads starting from /Users/ciprian/Library/CloudStorage/OneDrive/Documents will always mean changing to /Users/ciprian/Library/CloudStorage/OneDrive/Downloads . This is also what you'll get with cd -P . If you don't like this (mis)feature of the shell, there is probably a shell-specific way to disable it. With the bash shell, adding set -P or set -o physical to your ~/.bashrc or similar shell start-up script would make bash behave like all cd commands had the -P option unless the -L option is explicitly used. Note . The documentation of -P is in man bash (section SHELL BUILTIN COMMANDS) because cd is an internal command of Bash. | {
"source": [
"https://unix.stackexchange.com/questions/731072",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/68186/"
]
} |
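A small illustration of the answer above using the question's own paths (the expected output is shown as comments and is taken from the answer, not re-verified):
cd ~/Documents            # a symlink to .../OneDrive/Documents
cd -L ../Downloads; pwd   # /Users/ciprian/Downloads (logical: old path text minus one component)
cd ~/Documents
cd -P ../Downloads; pwd   # /Users/ciprian/Library/CloudStorage/OneDrive/Downloads (physical: real parent)
Because cd is a shell builtin, -P and -L are documented under SHELL BUILTIN COMMANDS in man bash rather than in a separate man cd page.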
733,013 | I am confused by the terminology used to describe Linux signal delivery. Most texts say things like "the signal is delivered to the process" or "the signal is delivered to the thread". It is my understanding that a signal is "delivered" to a signal handler, which resides in a process, when the kernel calls that handler. The process itself is running asynchronously, and this "delivery" process is akin to a CPU calling an interrupt handler. The interrupt handler (signal handler) is not the process thread, nor any thread running under that process, correct? It is a separate thread of its own started by the kernel. So the signal is not delivered to a thread or a process, but is delivered to a signal handler residing in the process and not necessarily associated with any specific thread. If this is not correct, please tell me, for example, the association between the signal handler and a pthread that justifies the terminology of "signal delivered to a pthread". | A signal handler is just a function within a given process' address space. This function is executed whenever the signal is received. There's nothing special about it (although there are certain actions that should not be performed within a signal handler), and it does not reside in a special thread. While signals are often described as being software interrupts, they aren't actually asynchronous. * When a signal is sent to a process, the kernel adds it to the process' pending signal set. It doesn't cause anything to happen immediately. The signal will only actually do anything at the next context switch back to userspace (whether that's a syscall returning or the scheduler switching to that process). If a process were to, for whatever reason, never switch from kernel to user, the signal would be kept in the pending signal set and never acted upon. † When a process establishes a signal handler, it gives the kernel an address to a function. When the process is to receive a signal, the next context switch from kernelspace to userspace will not restore the execution context from before the process entered the kernel (usually, the context is saved when entering the kernel and restored upon exiting it). Instead, it will "restore" execution at the location of the signal handler. When the signal handler returns, it executes code which calls rt_sigreturn() , which restores the real execution context, allowing the process to continue where it left off. When a process has multiple threads (i.e. there are multiple processes in a given thread group), the signal is sent to one of the threads in the thread group at random. This is because threads typically share memory and many other resources and run the same code. * While they aren't asynchronous from the perspective of hardware, they are effectively asynchronous as far as userspace applications are concerned. This is why they are sometimes called software interrupts. † When I refer to context switches, I mean privilege or process switches (i.e. both simple mode transitions between kernel and user within the same process and "true" context switches between processes or kernel threads). | {
"source": [
"https://unix.stackexchange.com/questions/733013",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/558025/"
]
} |
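A minimal C sketch of what "delivery" looks like from user space — the handler below is an ordinary function in the process, run in the context of whichever existing thread the kernel picks when it next returns to user mode; no separate handler thread is created:
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void handler(int sig)        /* only sets a flag: async-signal-safe */
{
    (void)sig;
    got_signal = 1;
}

int main(void)
{
    struct sigaction sa = { 0 };
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGUSR1, &sa, NULL) == -1)
        return 1;
    printf("pid %ld waiting for SIGUSR1\n", (long)getpid());
    while (!got_signal)
        pause();                    /* the pending signal is acted on when the kernel returns to user space */
    puts("handler ran on an existing thread; execution then resumed here");
    return 0;
}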
733,021 | I am getting confused about setting local variables in bash functions. It seems that using local dgt
local ltr
local braces
local da could be safer than using local dgt ltr braces da I am worried about the possibility of a variable not getting defined as local, or not having the value set. Could that happen? For instance, consider local foo="$(mycmd)" The exit status of the command is overridden by the exit status of the creation of the local variable. Then the correct code would be local foo
foo=$(mycmd) | {
"source": [
"https://unix.stackexchange.com/questions/733021",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/540882/"
]
} |
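A short bash demonstration of the point raised in the question — the exit status of the command substitution is hidden by local when declaration and assignment are combined:
f() {
    local a=$(false);    echo "combined: $?"    # prints 0 — the status of the local builtin
    local b; b=$(false); echo "separate: $?"    # prints 1 — the status of false
}
f
Declaring first and assigning on a separate line is therefore the safer pattern whenever the command's exit status matters; the variable is local either way.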
734,543 | It's well known that redirecting standard output and error to the same file with cmd >out_err.txt 2>out_err.txt can lead to loss of data, as per the example below: work:/tmp$ touch file.txt
work:/tmp$ ls another_file.txt
ls: cannot access 'another_file.txt': No such file or directory The above is the setup code for the example. An empty file file.txt exists and another_file.txt is not a thing. In the code below, I naively redirect to out_err.txt both input and output os listing these files. work:/tmp$ ls file.txt another_file.txt >out_err.txt 2>out_err.txt
work:/tmp$ cat out_err.txt
file.txt
t access 'another_file.txt': No such file or directory And we see that we lost a few characters in the error stream. However, using >> works in the sense that replicating the example would keep the whole output and the whole error. Why and how does cmd >>out_err.txt 2>>out_err.txt work? | Not sure it's that well known, but it happens because done like that, the two file handles are completely separate, and have independent read/write positions. Hence they can overwrite each other. (They correspond to two distinct open file descriptions , to use the technical term, which is sadly somewhat easy to confuse with the term "file descriptor".) This only happens with foo > out.txt 2>out.txt , not with foo > out.txt 2>&1 , since the latter copies the file descriptor (referring to the same open file description). When appending, all writes go to the end of the file, as it is during the moment of the write. This is handled by the OS, atomically, so that there's no way for even another process to get in the middle. Hence, the issue from independent read/write positions is defused.
(Except it might not work over NFS, that's a filesystem restriction.) In your example, the error message ls: cannot access... is written first, at the start of the file. The write position of the stderr fd is now at the end of the file. Then the regular output of file.txt<newline> is also written, but the write position of the stdout fd is still at the start, so those 9 bytes overwrite part of the error message. With an appending fd, that second write would go to end, regardless of anything. | {
"source": [
"https://unix.stackexchange.com/questions/734543",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/499446/"
]
} |
734,931 | Syscalls (system calls) cause some performance penalty due to the isolation between kernel and user space. Therefore, it sounds like a good idea to reduce syscalls. So what I thought is, that we could pack together syscalls into a single one. So, the idea is to place the syscalls and arguments
in a simple data structure in memory. Then we could introduce a new syscall, which we give this data structure. The kernel could then trigger all the functionality in parallel and resume the thread if one (or all) syscalls finished. I think this approach would be a good basis for concurrent programming (asynchronous I/O) and would improve on existing select/poll/epoll solutions by allowing concurrency on any syscall and reducing overall context switches. Why is this not done? | This already exists. On Linux it’s implemented by io_uring , available since version 5.1 of the kernel (May 2019): operations are placed on a queue (or rather, ring) and processed without system calls, with their results going to another queue. | {
"source": [
"https://unix.stackexchange.com/questions/734931",
"https://unix.stackexchange.com",
"https://unix.stackexchange.com/users/358092/"
]
} |
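A hedged liburing sketch of the batched-submission model described in the answer (assumes liburing is installed and a reasonably recent kernel; "somefile" is a hypothetical path and error handling is trimmed for brevity):
#include <liburing.h>
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    struct io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0)          /* one ring, up to 8 queued operations */
        return 1;

    int fd = open("somefile", O_RDONLY);
    char buf[4096];

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring); /* queue a read without issuing a syscall yet */
    io_uring_prep_read(sqe, fd, buf, sizeof buf, 0);

    io_uring_submit(&ring);                             /* one syscall submits everything queued so far */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);                     /* wait for a completion from the other queue */
    printf("read returned %d\n", cqe->res);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}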