Dataset schema: ID (int64, 1 to 1.07k) · Comment (string, 8 to 1.13k chars) · Code (string, 10 to 4.28k chars) · Label (string, 4 classes) · Source (string, 21 chars) · File (string, 4 to 82 chars)
101
Network analysis using EBICglasso with the German-speaking sample, including the open-mindedness scale
gr2 <- list('Deliberate MW' = c(1:4), 'Spontaneous MW' = c(5:8), 'Boredom' = c(9:16), 'Open-Mindedness' = c(17:22))
names2 <- c("I allow my thoughts to wander on purpose",
            "I enjoy mind-wandering",
            "I find mind-wandering is a good way to cope with boredom",
            "I allow myself to get absorbed in pleasant fantasy",
            "I find my thoughts wandering spontaneously",
            "When I mind-wander my thoughts tend to be pulled from topic to topic",
            "It feels like I don’t have control over when my mind wanders",
            "I mind wander even when I’m supposed to be doing something else",
            "I often find myself at “loose ends,” not knowing what to do",
            "I find it hard to entertain myself",
            "Many things I have to do are repetitive and monotonous",
            "It takes more stimulation to get me going than most people",
            "I don’t feel motivated by most things that I do",
            "In most situations, it is hard for me to find something to do or see to keep me interested",
            "Much of the time, I just sit around doing nothing",
            "Unless I am doing something exciting, even dangerous, I feel half-dead and dull",
            "Has few artistic interests",
            "Is complex, a deep thinker",
            "Is original, comes up with new ideas",
            "Is fascinated by art, music, or literature",
            "Has little interest in abstract ideas",
            "Has little creativity")
n2 <- estimateNetwork(DataGerman, default = "EBICglasso")
plot(n2, groups = gr2, nodeNames = names2, legend.cex = .35)
centrality_auto(n2, weighted = TRUE, signed = TRUE)
centralityPlot(n2, include = c("Betweenness", "Closeness", "Strength"))
print(n2)
Statistical Modeling
https://osf.io/tg3fq/
syntax_SDMWS&SBPS.R
102
Extract the vectors of analysed characteristics of the Partner and the non-biological father from the dataset.
text<-paste("trait_p<-data$Partner_",char,sep="") eval(parse(text=text)) text<-paste("trait_f<-data$Nonbiol_",char,sep="") eval(parse(text=text)) text<-paste("trait_pb<-dbiol$Partner_",char,sep="") eval(parse(text=text)) text<-paste("trait_b<-dbiol$Biol_",char,sep="") eval(parse(text=text))
Data Variable
https://osf.io/greqt/
functions2.R
103
Cohen's d. First create the equivalent arrangement for the estimation of differences if the father is biological and present
muB <- rep(mean(diffb, na.rm = T), 15)
sdB <- rep(sd(diffb, na.rm = T), 15)
nB <- rep(length(diffb[!is.na(diffb)]), 15)
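# Hedged sketch of the Cohen's d these summaries feed into; muP, sdP, and nP
# are hypothetical names for the comparison group, not taken from the source:
sd.pooled <- sqrt(((nB - 1) * sdB^2 + (nP - 1) * sdP^2) / (nB + nP - 2))
d <- (muB - muP) / sd.pooled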
Statistical Test
https://osf.io/greqt/
functions2.R
104
Plots also when there is no interaction, and also squared terms (potentially involved in interactions). Create a data frame storing the observed mean values per combination of the gridded covariate values and factor levels in the data; create vectors denoting the grid-cell center values for the two covariates:
if (!is.na(grid.resol)) {
  xvar = seq(min(plot.data[, covariate]), max(plot.data[, covariate]), length.out = grid.resol)
  bin.x = cut(x = plot.data[, covariate], breaks = xvar, include.lowest = T, labels = F)
  bin.x = min(xvar) + diff(xvar[1:2]) / 2 + (bin.x - 1) * diff(xvar[1:2])
} else {
  bin.x = plot.data[, covariate]
}
Visualization
https://osf.io/vjeb3/
draw.2.w.int.bw.1.cov.and.1.fac.r
105
remove rows where wkl_uuid is NA:
VS <- VS[!is.na(VS$wkl_uuid), ]
Data Variable
https://osf.io/w7pjy/
format_captWKLquality.R
106
Generate data assuming an underlying VAR(1) process. Function to generate data assuming an underlying VAR(1) process: the input is N, the number of samples; T.days and T.beeps are the number of days and the number of beeps per day at which ESM is conducted (the total number of assessments is T.days x T.beeps); Psi is a matrix with the fixed regression weights; mu is a vector with the fixed intercepts; var.Psi is the variance of the random regression weights (i.e., assumed to be equal). For each participant, it is checked whether the matrix Psi conforms to the assumption of a stationary time series, with the absolute value of the maximum eigenvalue smaller than 1. The distribution of the random effects resembles a truncated multivariate normal distribution. This function simulates data from a VAR(1) model
Data.VAR.Fixed = function(N, T, Psi, cor.Sigma){
  p = ncol(Psi)
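# The function body is truncated in the source. A minimal self-contained
# sketch of the steps the comment describes; simulate_var1 is a hypothetical
# helper (not the source's function), and the innovation covariance Sigma is
# taken as given rather than built from cor.Sigma:
simulate_var1 <- function(Tt, Psi, mu, Sigma) {
  # stationarity check from the comment: max |eigenvalue| of Psi must be < 1
  stopifnot(max(abs(eigen(Psi)$values)) < 1)
  p <- ncol(Psi)
  y <- matrix(NA_real_, Tt, p)
  y[1, ] <- mu
  for (t in 2:Tt) {
    # VAR(1) recursion: y_t = mu + Psi (y_{t-1} - mu) + e_t
    y[t, ] <- mu + as.vector(Psi %*% (y[t - 1, ] - mu)) +
      MASS::mvrnorm(1, rep(0, p), Sigma)
  }
  y
}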
Statistical Modeling
https://osf.io/rs6un/
Data.VAR.Fixed.R
107
remove layers with gtype_class "MF" (only three distinct MF layers in data set!)
toDiscard <- which(CM$gtype_class == "MF")
if (length(toDiscard) > 0) CM <- CM[-toDiscard, ]
Data Variable
https://osf.io/w7pjy/
format_confusionMatrix.R
108
Testing participant heterogeneity: chi-square test:
testHetChi(freq = "data/data_retrieval.csv",
           tree = c("E","E","E", "U","U","U", "N","N","N"))
testHetChi(freq = "data/data_encoding.csv",
           tree = c("E","E","E", "U","U","U", "N","N","N"))
Statistical Test
https://osf.io/s82bw/
01_TreeBUGS_with_csv_files.R
109
Fitting a betaMPT model
m.retrieval.beta <- betaMPT(eqnfile = "model/2htsm.eqn",
                            data = "data/data_retrieval.csv",
                            restrictions = "model/restrictions.txt",
                            modelfilename = "results/2htsm_betaMPT.jags",
                            transformedParameters = list("deltaDd=D1-d1"),
                            parEstFile = "results/results_retrieval_betaMPT.txt",
                            n.chain = 4, n.iter = 50000, n.adapt = 10000,
                            n.burnin = 10000, n.thin = 10, ppp = 5000, dic = TRUE)
summary(m.retrieval.beta)
Statistical Modeling
https://osf.io/s82bw/
01_TreeBUGS_with_csv_files.R
110
Within-subject tests. Example: two-high-threshold model (2HTM, included in TreeBUGS)
htm <- system.file("MPTmodels/2htm.eqn", package="TreeBUGS")
Statistical Modeling
https://osf.io/s82bw/
01_TreeBUGS_with_csv_files.R
111
Plot ranked correlations
cor_dat <- corr_cross(ogdatasub,
                      max_pvalue = 0.05, # display only significant correlations (at 5% level)
                      pvalue = T,
                      plot = F,
                      top = 10 # display top 10 correlations (by correlation coefficient)
                      ) |>
  arrange(abs(corr)) |>
  mutate(lab = paste(key, mix, sep = " + "),
         lab = factor(lab))
cor_dat |>
  ggplot(aes(y = lab, x = abs(corr), fill = as.factor(sign(corr)))) +
  geom_col() +
  scale_fill_manual(values = c("#E64B3D", "#0092B2")) +
  labs(title = "", subtitle = "",
       caption = "Note: Red bars indicate a negative relationship") +
  geom_text(aes(label = signif(.data$corr, 2)), colour = "#FFFFFF", size = 3.5, hjust = 1.1) +
  scale_x_continuous(limits = c(0, .4), position = "top") +
  scale_y_discrete(labels = c("fst + cst" = "Difficult Financial Situation & Cost of Food",
                              "rsp + spp" = "Loved Ones' Needs & Lack of Support",
                              "pdd + ppd" = "Positive Experiences with Veg*nism & Positive Elements Post-Diet",
                              "eft + prc" = "Meal Complexity & Need for Specialized Equipment or Planning",
                              "fds + oth" = "Discontent and Cravings for Animal Products & Other Obstacles",
                              "rsp + eft" = "Loved Ones' Needs & Meal Complexity",
                              "fph + cst" = "Challenging Purchase Requirements & Cost of Food",
                              "avl + fph" = "Limited Access to Veg*n Options & Challenging Purchase Requirements",
                              "med + con" = "Professional Medical Advice & Difficulty Managing Health",
                              "con + com" = "Difficulty Managing Health & Commitment Difficulties",
                              "fcn + prc" = "Lack of Veg*n Knowledge & Need for Specialized Equipment or Planning")) +
  theme_minimal() +
  theme(axis.title = element_blank(), legend.position = "none")
cor_dat
ggsave("Correlations.png", width = 8, height = 6)
Visualization
https://osf.io/q2zrp/
Graphs.R
112
Predictive performance plot (model size selection plot)
stat_pretty <- setNames(nm = proj_evalstats)
stat_pretty <- toupper(stat_pretty)
stopifnot(identical(proj_evalstats, "mlpd"))
ggeval <- plot(C_cvvs, stats = proj_evalstats, deltas = TRUE, ranking_nterms_max = NA)
ggeval <- ggeval + facet_null()
ggeval <- ggeval + scale_y_continuous(
  sec.axis = sec_axis(~ exp(.), name = bquote(Lambda*" "*"GMPD"))
)
ggeval <- ggeval + labs(y = bquote(Delta*" "*.(stat_pretty)))
print(ggeval)
ggsave(file.path("output", out_folder, paste0(plot_prefix, "projpred_search_deltas.jpeg")),
       width = 7, height = 7 * 0.618)
saveRDS(last_plot(), file = file.path("output", out_folder, paste0(plot_prefix, "projpred_search_deltas.rds")))
Visualization
https://osf.io/emwgp/
projpred.R
113
save attribute level for option i in trial row.id
wide.data[[paste0("opt",i,att.names[k])]][row.id]=feat.ij
Data Variable
https://osf.io/tbczv/
01-readData.r
114
save position of attribute j
wide.data[[paste0("pos",att.names[k])]][row.id]=j
Data Variable
https://osf.io/tbczv/
01-readData.r
115
save attribute of position k
wide.data[[paste0("pos",j)]][row.id]=att.names[k] } } } if(y$selected[i]==1) wide.data$response[row.id]=i }
Data Variable
https://osf.io/tbczv/
01-readData.r
116
1. recode RT for trials with slow (missed) responses as NA
data$rt[data$trialError == " Slow"] <- NA
Data Variable
https://osf.io/tbczv/
01-readData.r
117
t-tests to compare the composite 'animal protection behaviors' in people who have vs. have not experienced each advocacy type (16 types total)
t.test(beh_animalprotect_comp ~ graphic_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ nongraphic_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ person_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ leaflet_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ news_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ social_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ humaneed_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ documentary_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ book_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ celebrity_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ ad_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ challenge_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ labels_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ labeled_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ ndisprotest_exp_buc, data = data, var.equal = TRUE)
t.test(beh_animalprotect_comp ~ disprotest_exp_buc, data = data, var.equal = TRUE)
Statistical Test
https://osf.io/3aryn/
7AnimalProtectionandConsumerBehaviors_Spanish.R
118
FDR correction for the t-tests that compared the composite 'animal protection behaviors' between people who have and have not experienced each advocacy type
adjusted_pvalues_protectionbehaviors <- data %>%
  summarise(graphic_exp_buc = t.test(beh_animalprotect_comp ~ graphic_exp_buc, var.equal = TRUE)$p.value,
            nongraphic_exp_buc = t.test(beh_animalprotect_comp ~ nongraphic_exp_buc, var.equal = TRUE)$p.value,
            person_exp_buc = t.test(beh_animalprotect_comp ~ person_exp_buc, var.equal = TRUE)$p.value,
            leaflet_exp_buc = t.test(beh_animalprotect_comp ~ leaflet_exp_buc, var.equal = TRUE)$p.value,
            news_exp_buc = t.test(beh_animalprotect_comp ~ news_exp_buc, var.equal = TRUE)$p.value,
            social_exp_buc = t.test(beh_animalprotect_comp ~ social_exp_buc, var.equal = TRUE)$p.value,
            humaneed_exp_buc = t.test(beh_animalprotect_comp ~ humaneed_exp_buc, var.equal = TRUE)$p.value,
            documentary_exp_buc = t.test(beh_animalprotect_comp ~ documentary_exp_buc, var.equal = TRUE)$p.value,
            book_exp_buc = t.test(beh_animalprotect_comp ~ book_exp_buc, var.equal = TRUE)$p.value,
            celebrity_exp_buc = t.test(beh_animalprotect_comp ~ celebrity_exp_buc, var.equal = TRUE)$p.value,
            ad_exp_buc = t.test(beh_animalprotect_comp ~ ad_exp_buc, var.equal = TRUE)$p.value,
            challenge_exp_buc = t.test(beh_animalprotect_comp ~ challenge_exp_buc, var.equal = TRUE)$p.value,
            labels_exp_buc = t.test(beh_animalprotect_comp ~ labels_exp_buc, var.equal = TRUE)$p.value,
            labeled_exp_buc = t.test(beh_animalprotect_comp ~ labeled_exp_buc, var.equal = TRUE)$p.value,
            ndisprotest_exp_buc = t.test(beh_animalprotect_comp ~ ndisprotest_exp_buc, var.equal = TRUE)$p.value,
            disprotest_exp_buc = t.test(beh_animalprotect_comp ~ disprotest_exp_buc, var.equal = TRUE)$p.value) %>%
  gather("Advocacy", "p_value") %>%
  mutate(p_fdr = p.adjust(p_value, method = "fdr", n = length(p_value))) %>%
  print()
Statistical Test
https://osf.io/3aryn/
7AnimalProtectionandConsumerBehaviors_Spanish.R
119
t-tests to compare the composite 'animal consumer behaviors' in people who have vs. have not experienced each advocacy type (16 types total)
t.test(beh_consumer_comp ~ graphic_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ nongraphic_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ person_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ leaflet_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ news_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ social_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ humaneed_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ documentary_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ book_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ celebrity_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ ad_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ challenge_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ labels_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ labeled_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ ndisprotest_exp_buc, data = data, var.equal = TRUE)
t.test(beh_consumer_comp ~ disprotest_exp_buc, data = data, var.equal = TRUE)
Statistical Test
https://osf.io/3aryn/
7AnimalProtectionandConsumerBehaviors_Spanish.R
120
prepare animal protection behaviors for stacked graph
mean_CI_protect <- data %>% group_by(advocacytype, experienced) %>% summarize(n = n(), protect_mean = mean(beh_animalprotect_comp), protect_lci = t.test(beh_animalprotect_comp, conf.level = 0.95)$conf.int[1], protect_uci = t.test(beh_animalprotect_comp, conf.level = 0.95)$conf.int[2])
Visualization
https://osf.io/3aryn/
7AnimalProtectionandConsumerBehaviors_Spanish.R
121
add asterisks to advocacy types that had significantly different scores between experienced vs. not experienced in the t-tests
mean_CI_protect <- mean_CI_protect %>%
  mutate(advocacytype = case_when(advocacytype == "ad" ~ "Anuncio o Valla Publicitaria*",
                                  advocacytype == "book" ~ "Libro*",
                                  advocacytype == "celebrity" ~ "Celebridad*",
                                  advocacytype == "challenge" ~ "Reto de Evitar la Carne*",
                                  advocacytype == "disprotest" ~ "Protesta Disruptiva*",
                                  advocacytype == "graphic" ~ "Video Gráfico*",
                                  advocacytype == "humaneed" ~ "Educación en el Aula de Clases*",
                                  advocacytype == "labeled" ~ "Información Educativa Sobre las Etiquetas de Bienestar*",
                                  advocacytype == "labels" ~ "Etiquetas Vegano/De Base Vegetal*",
                                  advocacytype == "ndisprotest" ~ "Protesta No Disruptiva*",
                                  advocacytype == "news" ~ "Artículo de Noticias*",
                                  advocacytype == "nongraphic" ~ "Video No Gráfico*",
                                  advocacytype == "person" ~ "Divulgación Boca a Boca *",
                                  advocacytype == "social" ~ "Publicación en Redes Sociales o en un Blog*",
                                  advocacytype == "documentary" ~ "Documental*",
                                  advocacytype == "leaflet" ~ "Folleto o Volante*"))
mean_CI_consume <- mean_CI_consume %>%
  mutate(advocacytype = case_when(advocacytype == "ad" ~ "Anuncio o Valla Publicitaria*",
                                  advocacytype == "book" ~ "Libro*",
                                  advocacytype == "celebrity" ~ "Celebridad*",
                                  advocacytype == "challenge" ~ "Reto de Evitar la Carne*",
                                  advocacytype == "disprotest" ~ "Protesta Disruptiva*",
                                  advocacytype == "graphic" ~ "Video Gráfico*",
                                  advocacytype == "humaneed" ~ "Educación en el Aula de Clases*",
                                  advocacytype == "labeled" ~ "Información Educativa Sobre las Etiquetas de Bienestar*",
                                  advocacytype == "labels" ~ "Etiquetas Vegano/De Base Vegetal*",
                                  advocacytype == "ndisprotest" ~ "Protesta No Disruptiva*",
                                  advocacytype == "news" ~ "Artículo de Noticias*",
                                  advocacytype == "nongraphic" ~ "Video No Gráfico*",
                                  advocacytype == "person" ~ "Divulgación Boca a Boca *",
                                  advocacytype == "social" ~ "Publicación en Redes Sociales o en un Blog*",
                                  advocacytype == "documentary" ~ "Documental*",
                                  advocacytype == "leaflet" ~ "Folleto o Volante*"))
Statistical Test
https://osf.io/3aryn/
7AnimalProtectionandConsumerBehaviors_Spanish.R
122
factorize advocacy type for graph
mean_CI_protect <- mean_CI_protect %>% mutate(advocacytype = factor(advocacytype, bar_order))
mean_CI_consume <- mean_CI_consume %>% mutate(advocacytype = factor(advocacytype, bar_order))
Data Variable
https://osf.io/3aryn/
7AnimalProtectionandConsumerBehaviors_Spanish.R
123
grouped bar graph for animal protection behaviors
ggplot(mean_CI_protect, aes(x = advocacytype, fill = experienced, y = protect_mean)) +
  geom_col(width = 0.8, position = "dodge") +
  coord_flip(ylim = c(1, 5)) +
  geom_errorbar(aes(x = advocacytype, ymin = protect_lci, ymax = protect_uci),
                width = 0.4, colour = "black", position = position_dodge(.8)) +
  geom_text(aes(label = format(round(protect_mean, 1)), y = protect_uci),
            hjust = -0.2, # nudge_y = 2,
            size = 3,
            position = position_dodge(width = 1)) +
  labs(y = "Puntuación Media del Comportamiento de Protección de los Animales",
       x = "Tipo de Defensa",
       caption = "Un asterisco (*) indica que hubo una diferencia estadísticamente significativa entre los grupos (todos los ps < 0,05) después de haber corregido mediante FDR. Para más detalles sobre cómo se llevaron a cabo estos análisis, véase el Material Complementario.") +
  theme(legend.title = element_blank(), legend.position = "bottom",
        panel.background = element_rect("white"),
        panel.border = element_rect(fill = NA),
        panel.grid.major.x = element_line("grey")) +
  scale_fill_manual(values = c("Experimentado" = "#c47020", "No Experimentado" = "#F68D29"))
Visualization
https://osf.io/3aryn/
7AnimalProtectionandConsumerBehaviors_Spanish.R
124
We create a list of parameters to visualize, and fill in the labels etc. in the .txt file in any table editor (e.g., Excel)
write.table(data.frame(nam=rep.par2),"params_empty.txt",row.names=F,sep="\t")
Visualization
https://osf.io/fr5ed/
03_posterior_visualization_country_contrast.R
125
plot contrast estimates
plot(NULL, ylim = c(max(dec$y) + const1 + const2, min(dec$y) - const1), type = "n",
     xaxs = "i", yaxs = "i", xaxt = "n", yaxt = "n", xlim = xlims[[block]], bty = "n")
abline(h = dec$y, col = col.grid, lty = 1, lwd = lwd.grid)
segments(axes[[block]], max(dec$y) + const1, axes[[block]], max(dec$y) + tic + const1, lwd = lwd.ax, col = col.ax)
text(axes[[block]], max(dec$y) + tic + const1 + ofs, labels = axes[[block]], col = col.ax, cex = 0.9, font = 2)
lines(range(axes[[block]]), rep(max(dec$y) + const1, 2), lwd = lwd.ax, col = col.ax)
segments(axes[[block]], min(dec$y) - const1, axes[[block]], max(dec$y) + const1, lwd = lwd.v, col = col.ax, lty = 3)
segments(0, min(dec$y) - const1, 0, max(dec$y) + const1, lwd = lwd.v, col = col.ax, lty = 1)
Visualization
https://osf.io/fr5ed/
03_posterior_visualization_country_contrast.R
126
Draw density polygons; density areas are scaled within each block.
area <- 0.25 * diff(xlims[[block]])
for (i in 1:nrow(dec)) {
  thispost <- postsc[, dec$n[i]]
  dens <- density(thispost)
  polX <- c(dens$x, rev(dens$x))
  polY <- c(dens$y, rev(-dens$y))
  ar1 <- abs(polyarea(polX, polY))
  perc <- area / ar1
  polygon(polX, polY * perc + dec$y[i], # "dei$y[i]" in the source looks like a typo for dec$y[i]
          col = dec$hex[i], border = col.pol, lwd = lwd.pol)
}
Visualization
https://osf.io/fr5ed/
03_posterior_visualization_country_contrast.R
127
correlations between csd_mean and averaged csds for Big Five traits (mentioned in the text in Appendix A):
psych::corr.test(df2$n_csd, df2$csd_mean_n)
psych::corr.test(df2$e_csd, df2$csd_mean_e)
psych::corr.test(df2$o_csd, df2$csd_mean_o)
psych::corr.test(df2$a_csd, df2$csd_mean_a)
psych::corr.test(df2$c_csd, df2$csd_mean_c)
Data Variable
https://osf.io/tajd9/
Flip_MainDataAnalyses.R
128
correlations between variability and well-being measures:
names_var <- c("csd_mean_n", "csd_mean_e", "csd_mean_o", "csd_mean_a", "csd_mean_c",
               "sccs", "csd_nob", "simpson_csd", "rses", "swls", "pa", "na")
psych::corr.test(df2[, names_var])
Statistical Modeling
https://osf.io/tajd9/
Flip_MainDataAnalyses.R
129
catch situation where Sx is missing so we cannot assess response to treatment
if (nrow(thisSx) < 1) {
  thisRow$status.sx <- NA
  noFurtherFlag <- 1 ## tell rest of loop to not bother
  thisRow$missing.Sx <- 1
}
Data Variable
https://osf.io/5y27d/
tabulate_cases.R
130
Descriptive analysis: counting occurrences of body-based units and coded themes. Step 1: Analyse frequencies of body-based units of measure. Ensure the data is in character format
df$`Body dimension` <- as.character(df$`Body dimension`)
Data Variable
https://osf.io/fegvr/
Analysis.R
131
Repeating confirmatory analyses with extra exclusions: in addition to excluding participants who did not complete the study, and excluding time-until-guess and arithmetic-solving times more than 5 standard deviations away from the mean, this analysis also excludes participants who removed 0 tiles when guessing. Exclude participants who did not complete the study
d.conf_1_2.complete <- d.conf_1_2.b[complete.cases(d.conf_1_2.b), ]
d.conf_3_math.complete <- d.conf_3_math.b[complete.cases(d.conf_3_math.b), ]
Data Variable
https://osf.io/7vbj9/
Analyses_Exploratory.R
132
remove rows with time-until-guess values that are more than 5 standard deviations away from the mean
agg <- aggregate(ElapsedTime_Guess ~ Guess_Number + ID_Player + Effort + Competition,
                 data = d.conf_1_2.complete, FUN = mean)
sd5.times <- mean(agg$ElapsedTime_Guess) + (5 * sd(agg$ElapsedTime_Guess))
nrow(agg[agg$ElapsedTime_Guess > sd5.times, ])
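# Note: the nrow() call above only counts the offending rows; a hedged sketch
# of the removal step the comment announces (same objects assumed):
agg <- agg[agg$ElapsedTime_Guess <= sd5.times, ]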
Statistical Test
https://osf.io/7vbj9/
Analyses_Exploratory.R
133
remove rows with arithmetic-solving times that are more than 5 standard deviations away from the mean
sd5.times <- mean(d.conf_3_math.complete$ElapsedTime_Math) + (5 * sd(d.conf_3_math.complete$ElapsedTime_Math))
nrow(d.conf_3_math.complete[d.conf_3_math.complete$ElapsedTime_Math > sd5.times, ])
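# As above, only the count is shown; a hedged sketch of the removal itself:
d.conf_3_math.complete <- d.conf_3_math.complete[d.conf_3_math.complete$ElapsedTime_Math <= sd5.times, ]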
Statistical Test
https://osf.io/7vbj9/
Analyses_Exploratory.R
134
Exclude participants who revealed 0 tiles
d.conf_1_2.complete_no0 <- d.conf_1_2.complete[d.conf_1_2.complete$TilesRevealed > 0, ]
d.conf_3_math.complete_no0 <- d.conf_3_math.complete[d.conf_3_math.complete$TilesRevealed > 0, ]
Data Variable
https://osf.io/7vbj9/
Analyses_Exploratory.R
135
compare simpler models with sex and sex*effort
df <- compare(m_accuracy_guess_nointer, m_accuracy_componly, m_accuracy_sex,
              m_accuracy_sex_effort_inter, m_accuracy_sex_effort_inter_CEinter)
df <- round(df@output, 2)
df %>%
  kable(caption = "ACCURACY") %>%
  kable_styling(bootstrap_options = c("striped", "condensed", "responsive"), full_width = TRUE) %>%
  column_spec(1:5, width = "2cm")
Statistical Modeling
https://osf.io/7vbj9/
Analyses_Exploratory.R
136
aggregate data for time elapsed per guess, by first removing duplicate times, then summing up total times per player
d.reward.agg <- aggregate(ElapsedTime_Guess ~ Effort + Competition + Sex + Guess_Number + ID_Player,
                          data = d.reward, FUN = mean)
d.reward.agg <- aggregate(ElapsedTime_Guess ~ Effort + Competition + Sex + ID_Player,
                          data = d.reward.agg, FUN = sum)
Data Variable
https://osf.io/7vbj9/
Analyses_Exploratory.R
137
Bayes factors. Load additional libraries; subset data into separate objects for frequentist analyses
d.math.agg.f <- d.math.agg
d.math.agg.f$Competition <- as.factor(d.math.agg.f$Competition)
d.math.agg.f$ID_Player <- as.factor(d.math.agg.f$ID_Player)
d.math.agg.f$Sex <- as.factor(d.math.agg.f$Sex)
d.conf.agg.f <- d.conf.agg
d.conf.agg.f$Competition <- as.factor(d.conf.agg.f$Competition)
d.conf.agg.f$Effort <- as.factor(d.conf.agg.f$Effort)
d.conf.agg.f$ID_Player <- as.factor(d.conf.agg.f$ID_Player)
d.conf.agg.f$Sex <- as.factor(d.conf.agg.f$Sex)
Data Variable
https://osf.io/7vbj9/
Analyses_Exploratory.R
138
Plot AUC
ggplot(plot.data, aes(x = Target, y = AUC, color = Algorithm, fill = Algorithm, shape = Algorithm)) + geom_boxplot(width = 0.3,lwd = 1, aes(color = Algorithm, fill = Algorithm), alpha = 0.3, outlier.shape=NA, position=position_dodge(0.5)) + geom_point(position=position_jitterdodge(jitter.width = 0.1, dodge.width = 0.5), size = 0.5) + coord_flip() + geom_hline(yintercept = 0.5, color="black", linetype = "dashed", size = 1) + theme_classic() + theme(text = element_text(size = 18), axis.title.y = element_blank(), legend.position = c(0.85, 0.5)) + scale_color_manual(values = c("#440154", "#5ec962")) + scale_fill_manual(values = c("#440154", "#5ec962")) + ylim(0.25, 1) + scale_x_discrete(labels=rev(c("Sociality" = expression(paste("* ", bold("S"), "ociality")), "Deception" = expression(paste(bold("D"), "eception")), "Negativity" = expression(paste(bold("N"), "egativity")), "Positivity" = expression(paste("* p", bold("O"), "sitivity")), "Mating" = expression(paste("* ", bold("M"), "ating")), "Adversity" = expression(paste(bold("A"), "dversity")), "Intellectuality" = expression(paste("* ", bold("I"), "ntellect")), "Duty" = expression(paste("* ", bold("D"), "uty")))))
Visualization
https://osf.io/b7krz/
01_ML_bmr_resultsSummary.R
139
Step 4*: Visualization of the ROC curves for the "core benchmark" for the Online Supplemental Material. Duty:
p_Duty = autoplot(bmr$clone(deep = TRUE)$filter(task_ids = "Duty", learner_ids = c("lasso_Duty", "rf_Duty")), type = "roc") +
  theme_classic() +
  theme(title = element_blank(), legend.position = "none",
        axis.title = element_text(size = 18), axis.text = element_text(size = 16)) +
  labs(y = "Sensitivity", x = "1 - Specifity") +
  geom_abline(intercept = 0, slope = 1, color = "black", linetype = "dotted", size = 0.8) +
  annotate(geom = "text", x = 0, y = 1, label = "Duty", hjust = 0, vjust = 1, size = 10)
png("Figures_Tables/Figures/ROC_Curves/Duty.png", width = 450, height = 400)
p_Duty
dev.off()
Visualization
https://osf.io/b7krz/
01_ML_bmr_resultsSummary.R
140
Plot ROC curves and calculate AUC (if desired)
plotIt <- FALSE
currAUC <- plotAUC(allPs, allBFs, dt, numStudies, plotIt) # currently does not save the AUCs
ai <- ai + 1
aucs[ai, 1] <- sampleSizes[1]
aucs[ai, 2] <- effectSizes[2]
aucs[ai, 3] <- i # runNumber
aucs[ai, 4] <- currAUC[1]
aucs[ai, 5] <- currAUC[2]
plotIt <- FALSE
currAUC <- plotAUC(allPs2, allBFs2, dt, numStudies, plotIt) # currently does not save the AUCs
aucs$aucPhacked[ai] <- currAUC[1]
aucs$aucBFhacked[ai] <- currAUC[2]
if (currAUC[1] != currAUC[2]) {
  print(paste(effectSizes, i))
  print(currAUC)
}
} # end for i runTimes
} # end for ssa
allDt <- allDt[-1, ] # get rid of initial dummy row that was just used to init allDt
Visualization
https://osf.io/hzncs/
Witt_SDT_Simulations_OptionalStopping.R
141
Stacked plots by SDT outcome. Plot by whether hacked or not: plot hits, FAs, misses, and correct rejections for each effect size/sample size combo
ap <- aggregate(hits ~ criterion + sampleSize + effectSizes + withHack, data = allDt, mean)
ap2 <- aggregate(fa ~ criterion + sampleSize + effectSizes + withHack, data = allDt, mean)
hFA <- merge(ap, ap2, by = c("criterion", "sampleSize", "effectSizes", "withHack"))
ap2 <- aggregate(miss ~ criterion + sampleSize + effectSizes + withHack, data = allDt, mean)
hFA <- merge(hFA, ap2, by = c("criterion", "sampleSize", "effectSizes", "withHack"))
ap2 <- aggregate(corrRej ~ criterion + sampleSize + effectSizes + withHack, data = allDt, mean)
hFA <- merge(hFA, ap2, by = c("criterion", "sampleSize", "effectSizes", "withHack"))
hFA$hits <- hFA$hits / numStudies
hFA$fa <- hFA$fa / numStudies
hFA$corrRej <- hFA$corrRej / numStudies
hFA$miss <- hFA$miss / numStudies
Visualization
https://osf.io/hzncs/
Witt_SDT_Simulations_OptionalStopping.R
142
plot BFs vs p-values
plot(log(allPs),log(allBFs),bty="l")
Visualization
https://osf.io/hzncs/
Witt_SDT_Simulations_OptionalStopping.R
143
Create matrix of input variables with population-level effects:
C_datMM <- model.matrix( as.formula(paste("~", paste(vpreds_noInt, collapse = " + "))), data = C_dat )
Data Variable
https://osf.io/emwgp/
ppfs.R
144
Set the latent regression coefficients for booklets 1, 5, 7 and 10 for reading, and booklets 4, 6, 9 and 11 for science, to 0
betas <- data.frame(var = c(which(names(con_dat) %in% paste0("bookid.", c(1, 5, 7, 10))),
                            which(names(con_dat) %in% paste0("bookid.", c(4, 6, 9, 11)))),
                    dim = rep(2:3, each = 4),
                    value = 0)
Data Variable
https://osf.io/8fzns/
4H_PV_helper.R
145
Compute latent regression of the latent ability on the conditioning variables (excl. the first column which is the student ID)
latreg <- tam.latreg(likeli, Y = con_dat[, -1], pid = pid.sele,
                     control = list(maxiter = iter.2, acceleration = "Ramsay"),
                     beta.fixed = as.matrix(betas))
} else {
  latreg <- tam.latreg(likeli, Y = con_dat[, -1], pid = pid.sele,
                       control = list(maxiter = iter.2, acceleration = "Ramsay"))
}
latreg <- tam.latreg(likeli, Y = con_dat1[, -1], pid = pid.sele,
                     control = list(maxiter = iter.2, acceleration = "Ramsay"),
                     beta.fixed = as.matrix(betas))
} else {
  latreg <- tam.latreg(likeli, Y = con_dat1[, -1], pid = pid.sele,
                       control = list(maxiter = iter.2, acceleration = "Ramsay"))
}
Statistical Modeling
https://osf.io/8fzns/
4H_PV_helper.R
146
Draw 5 plausible values for each student from the resulting distribution, which is assumed to be normally distributed
pvs <- tam.pv(latreg, nplausible = 5, normal.approx = T, samp.regr = samp.regr.opt)
} else {
  pvs <- tam.pv(latreg_md, nplausible = 5, normal.approx = T, samp.regr = samp.regr.opt)
}
Statistical Modeling
https://osf.io/8fzns/
4H_PV_helper.R
147
Extract the regression coefficients of the conditioning variables, because they are fixed for the core domains in the next step at that value
# Columns: parameter (row) index, dimension (1-3), coefficient value
reg.coefs <- cbind(rep(1:dim(latreg$beta)[1], 3),
                   rep(1:3, each = dim(latreg$beta)[1]),
                   c(latreg$beta[, 1], latreg$beta[, 2], latreg$beta[, 3]))
Statistical Modeling
https://osf.io/8fzns/
4H_PV_helper.R
148
Extract IRT likelihood of the second model (math, read and scie plus digital domains)
likeli_md <- IRT.likelihood(mod2)
Statistical Modeling
https://osf.io/8fzns/
4H_PV_helper.R
149
Compute latent regression of the latent ability on the conditioning variables (excl. the first column which is the student ID). But this time only the regression coefficients of the digital domains are computed freely. The rest is fixed at the values of the first model
latreg_md <- tam.latreg(likeli_md, Y = con_dat[, -1], pid = pid.sele, control = list(maxiter = iter.2, acceleration = "Ramsay"), beta.fixed = reg.coefs)
Statistical Modeling
https://osf.io/8fzns/
4H_PV_helper.R
150
Calculate the number of surveys completed per person
N <- as.data.frame(table(dat$id))
names(N) <- c("id", "N")
dat <- merge(dat, N, by = "id", all.x = T)
table(dat$N, useNA = "ifany")
Data Variable
https://osf.io/nxyh3/
01a_DataPrep_Study1.R
151
DOWNSAMPLING. Loop to downsample data based on minimum time and distance between points. The time parameter is set in "while(diff < time in minutes | dist > in meters)". Downsampled points that were less than 60 min apart, unless the frog moved more than 20 m during the time window
tracks_dsmpl <- data.frame()
ids <- unique(tracks$id)
for (i in ids) {
  traj = subset(tracks, tracks$id == i)
  for (i in 1:nrow(traj)) {
    diff <- difftime(traj$dt[i + 1], traj$dt[i], units = "mins")
    delta_x <- traj$x_utm[i + 1] - traj$x_utm[i]
    delta_y <- traj$y_utm[i + 1] - traj$y_utm[i]
    dist <- sqrt(delta_x^2 + delta_y^2)
    if (is.na(diff)) {break}
    while (diff <= 60 & dist < 20) {
      traj <- traj[-(i + 1), ]
      diff <- difftime(traj$dt[i + 1], traj$dt[i], units = "mins")
      delta_x <- traj$x_utm[i + 1] - traj$x_utm[i]
      delta_y <- traj$y_utm[i + 1] - traj$y_utm[i]
      dist <- sqrt(delta_x^2 + delta_y^2)
      if (is.na(diff)) {break}
    }
  }
  tracks_dsmpl <- bind_rows(tracks_dsmpl, traj)
}
tracks_dsmpl <- tracks_dsmpl %>%
  group_by(id) %>%
  mutate(time_diff = difftime(dt, lag(dt, n = 1L), units = "min"),
         delta_x = x_utm - lag(x_utm, n = 1L),
         delta_y = y_utm - lag(y_utm, n = 1L),
         dist = sqrt(delta_x^2 + delta_y^2))
tracks <- tracks_dsmpl
Data Variable
https://osf.io/3bpn6/
all_spaceuse_dataproc.R
152
Remove first row (tagging date) and last row (untagging day) per individual. Sort the daily data, group by id, and add an id+date index as a new variable
daily %>% arrange(id) %>% group_by(id) %>% mutate(id_day = paste(id, date)) -> daily_grouped
Data Variable
https://osf.io/3bpn6/
all_spaceuse_dataproc.R
153
Join with summary stats
tracks_sum <- left_join(tracks_sum, duration)
tracks_sum <- left_join(tracks_sum, relocs)
tracks_sum <- left_join(tracks_sum, days_tracked)
Visualization
https://osf.io/3bpn6/
all_spaceuse_dataproc.R
154
Calculate average daily movement by behavioral category per individual for plotting
daily_beh_mean <- daily_beh %>% group_by(id, behavior) %>% dplyr::summarize(daily_dist = mean(daily_dist), max_dist = mean(max_dist), sex = first(sex), species = first(species))
Visualization
https://osf.io/3bpn6/
all_spaceuse_dataproc.R
155
MCPs: home range with 95% minimum convex polygon (MCP95)
mcp95<- mcp(data_sp["id"], percent = 95, unout = "m2")
Data Variable
https://osf.io/3bpn6/
all_spaceuse_dataproc.R
156
95% contour with the Hpi plug-in method. Define contour level
cont=c(95)
Visualization
https://osf.io/3bpn6/
all_spaceuse_dataproc.R
157
Initiate empty df to store the metadata
allfrogs_info <- data.frame(ID = numeric(0), Cont = numeric(0), frog_id = character(0))
Data Variable
https://osf.io/3bpn6/
all_spaceuse_dataproc.R
158
BOXPLOTS: DAILY MOVEMENT. FIGURE EXPORT: daily movement by sex
daily_plot <- tracks_sum %>%
  ggplot(aes(x = sex, y = log(mean_cumul), fill = species)) +
  theme_bw(20) +
  geom_boxplot(width = 0.6, outlier.shape = NA) +
  geom_jitter(aes(group = sex), position = position_jitterdodge(0.4), shape = 21,
              stroke = 1, color = "black", size = 2, alpha = 0.6) +
  scale_color_manual(values = c("black", "black")) +
  scale_fill_manual(values = c("#E7B800", "#0072B2", "#FC4E07")) +
  theme(legend.position = "none") +
  labs(y = "ln Daily movement (m)") +
  facet_wrap(~species, labeller = labeller(species = c("afemo" = "A. femoralis", "dtinc" = "D. tinctorius", "osylv" = "O. sylvatica"))) +
  theme(axis.title.x = element_blank(),
        axis.text.x = element_text(color = "black", size = 18),
        strip.text = element_text(face = "italic"),
        aspect.ratio = 4) +
  scale_x_discrete(labels = c("F", "M"), expand = expansion(add = 1))
daily_plot
Visualization
https://osf.io/3bpn6/
all_spaceuse_dataproc.R
159
Daily movement by sex and species, log-transformed violin plots
ggplot(daily_select, aes(x= sex, y=log(daily_dist))) + theme_bw(20) + geom_violin(aes(fill = species)) + scale_fill_manual(values=c("#E7B800", "#0072B2","#FC4E07")) + theme(legend.position="none") + labs(y = "ln Daily movement (m)") + geom_jitter(aes(shape=sex, fill = species), position=position_jitterdodge(0.2), size = 4, alpha=0.2) + facet_wrap(~species, labeller = labeller( species=c(afemo ="A. femoralis", dtinc = "D. tinctorius", osylv = "O. sylvatica"))) + theme(axis.title.x = element_blank(), axis.text.x = element_text(color = "black", size = 18), strip.text = element_text(face = "italic")) + scale_x_discrete(labels= c("F", "M"))
Visualization
https://osf.io/3bpn6/
all_spaceuse_dataproc.R
160
convert attributes from character to numeric
for (i in 1:3) {
  data[[paste0("opt", i, "cost")]] <- as.numeric(substring(data[[paste0("opt", i, "cost")]], 3, 7))
  data[[paste0("opt", i, "sides")]] <- as.numeric(substring(data[[paste0("opt", i, "sides")]], 1, 2))
  data[[paste0("opt", i, "deliveryTime")]] <- as.numeric(substring(data[[paste0("opt", i, "deliveryTime")]], 2, 3))
}
Data Variable
https://osf.io/tbczv/
exp1aDead-MNL-SAT-A.r
161
Step 9: Create the time difference variable and the final dependent variable (media_success), calculating the time difference between each document pair
result$date_diff <- as.Date(as.character(result$news_date), format="%Y-%m-%d")- as.Date(as.character(result$date), format="%Y-%m-%d")
Data Variable
https://osf.io/hfy4k/
prep.analysis.data.R
162
Remove trials with RT < 4710 ms (i.e., presses before disambiguating information in sentence)
dataset_exp1 <- dataset_exp1[which(dataset_exp1$rt > 4710), ]
Data Variable
https://osf.io/37rfb/
prediction_analyses.R
163
Add 'trackloss' column (if not looking at IA_1 or IA_2, then trackloss = 1)
dataset_exp1 <- within(dataset_exp1, { trackloss <- ifelse(average_target_sample_count_proportion == 0 & average_distractor_sample_count_proportion == 0, 1, 0) })
Data Variable
https://osf.io/37rfb/
prediction_analyses.R
164
t-tests (using participant-averaged data)
num_sub_exp1 <- length(unique((eyetrackingr.data.exp1$participant_number)))
threshold_t_exp1 <- qt(p = 1 - .05 / 2, df = num_sub_exp1 - 1) # Pick threshold for t based on alpha = .05, two-tailed
df_timeclust_exp1 <- make_time_cluster_data(response_time_exp1,
                                            test = "t.test",
                                            paired = TRUE,
                                            predictor_column = "trial_condition_new",
                                            threshold = threshold_t_exp1)
Statistical Test
https://osf.io/37rfb/
prediction_analyses.R
165
MSE for each participant
MSE.Sys.VAR.i = lapply(1:fold, function(k) MSE.k[[k]]$MSE.ki)
Data Variable
https://osf.io/rs6un/
MSE.VAR.Sys.R
166
Use mutate and the fit information from dffit to make the fitted data for mal (and likewise for dex, lac, and maldex)
# Each line fills in a Michaelis-Menten-style saturation curve, fit = s / (K + s);
# the numeric constants appear to be the fitted half-saturation constants for each
# sugar (inferred from the form of the expression, not stated in the source).
mal_fit <- fit_num %>% mutate(fit = (s) / (1.196506 + s))
dex_fit <- fit_num %>% mutate(fit = (s) / (62.736799 + s))
lac_fit <- fit_num %>% mutate(fit = (s) / (43.839063 + s))
maldex_fit <- fit_num %>% mutate(fit = (s) / (7.253739 + s))
Statistical Modeling
https://osf.io/9e3cu/
titration_answers.R
167
Logistic Mixed Effects Regression for Original Critical Targets
log_reg_data <- cards_long %>%
  filter(card == "card_1" | card == "card_3" | card == "card_6" |
         card == "card_7" | card == "card_9" | card == "card_14") %>%
  mutate(distance = case_when(card == "card_1" | card == "card_3" | card == "card_6" ~ 0,
                              card == "card_7" | card == "card_9" | card == "card_14" ~ 1),
         comf_acquaint = scale(comf_acquaint, scale = FALSE),
         comf_close = scale(comf_close, scale = FALSE),
         approp_acquaint = scale(approp_acquaint, scale = FALSE),
         approp_close = scale(approp_close, scale = FALSE))
log_reg_data$amit_wording <- factor(log_reg_data$amit_wording, levels = c("original", "new"))
h1_melogreg <- glmer(choice ~ distance + amit_wording + (1 + distance | id) + (1 | card),
                     data = log_reg_data, family = binomial(link = "logit"))
h1_melogreg_inter <- glmer(choice ~ distance * amit_wording + (1 + distance | id) + (1 | card),
                           data = log_reg_data, family = binomial(link = "logit"))
h1_model_comparison <- anova(h1_melogreg, h1_melogreg_inter)
Statistical Modeling
https://osf.io/bkuwa/
main_analysis_code.R
168
Highest density interval
This is a function that will calculate the highest density interval from a posterior sample.
The default is to calculate the highest 95 percent interval. It can be used with any numeric vector instead of having to use one of the specific MCMC classes. This function has been adapted from John K. Kruschke (2011), Doing Bayesian Data Analysis: A Tutorial with R and BUGS.
@param x Numeric vector of a distribution of data, typically a posterior sample
@param prob Width of the interval from some distribution; must be in the range [0,1]. Defaults to 0.95.
@param warn Option to turn off the multiple-sample warning message
@return Numeric range
@export
@examples
x <- qnorm(seq(1e-04, .9999, length.out = 1001))
hdi_95 <- hdi(x, .95)
hdi_50 <- hdi(x, .50)
hist(x, br = 50)
abline(v = hdi_95, col = "red")
abline(v = hdi_50, col = "green")
x <- exp(seq(pi * (1 - (1/16)), pi, len = 1000))
x <- c(x, rev(x)[1])
x <- c(x, x)
plot(sort(x), type = "l")
plot(density(x, adjust = 0.25))
abline(v = hdi(x, p = .49), col = 2)
abline(v = hdi(x, p = .50), col = 3)
hdi <- function(x, prob = 0.95, warn = TRUE) {
  if (anyNA(x)) {
    stop("HDI: ", "x must not contain any NA values.", call. = FALSE)
  }
  N <- length(x)
  if (N < 3) {
    if (warn) {
      warning("HDI: ", "length of `x` < 3.", " Returning NAs", call. = FALSE)
    }
    return(c(NA_integer_, NA_integer_))
  }
  x_sort <- sort(x)
  window_size <- as.integer(floor(prob * length(x_sort)))
  if (window_size < 2) {
    if (warn) {
      warning("HDI: ", "window_size < 2.", " `prob` is too small or x does not ",
              "contain enough data points.", " Returning NAs.", call. = FALSE)
    }
    return(c(NA_integer_, NA_integer_))
  }
  lower <- seq_len(N - window_size)
  upper <- window_size + lower
  # The source is truncated here; a hedged completion of Kruschke's algorithm:
  # pick the narrowest window containing `prob` of the sorted sample.
  widths <- x_sort[upper] - x_sort[lower]
  best <- which.min(widths)
  c(x_sort[best], x_sort[best + window_size])
}
Statistical Modeling
https://osf.io/nd9yr/
ordinal_helper_functions.R
169
test whether Condition predicts exclusion probability
Cond_exclude <- table(data$Cond, data$exclude)
chisq.test(Cond_exclude)
rm(Cond_exclude)
Data Variable
https://osf.io/sb3kw/
Study2B_analyses.R
170
Fix the following script by: Take our dataset (df), and then group it by our explanatory variable (Condition), and then summarise this dataset by creating a new variable in this summary dataset called Mean. Mean takes the mean of our Response variable, and ignores NA. Repeat for standard deviation. We've added one more column to our summary dataset, N. N is the number of observations per group. To do so, we want to add up how many times our DV is not missing (per group). When you think it is complete, save the script. The name of the script above will go from red to black. That is how you know you've saved it. You can use Ctrl+S to save, or click the floppy disk above. We want to know how the means, standard deviations, and Ns of Hire_Rating differed.
# A hedged completion of the stub, following the instructions above (assumes
# dplyr is loaded; Condition and Hire_Rating are the names the comment gives):
sum_hire <- df %>%
  group_by(Condition) %>%
  summarise(Mean = mean(Hire_Rating, na.rm = TRUE),
            SD = sd(Hire_Rating, na.rm = TRUE),
            N = sum(!is.na(Hire_Rating)))
Data Variable
https://osf.io/9vr6q/
summarise2.R
171
Calculate R squared for each model
r.squaredGLMM(m1)  # indegree
r.squaredGLMM(m2)  # outdegree
r.squaredGLMM(m3)  # betweenness
r.squaredGLMM(m4)  # outcloseness
r.squaredGLMM(m5)  # incloseness
r.squaredGLMM(m6)  # local clustering
r.squaredGLMM(m7)  # average shortest path length
r.squaredGLMM(m8)  # eigenvector
r.squaredGLMM(m9)  # outstrength
r.squaredGLMM(m10) # instrength
Statistical Test
https://osf.io/wc3nq/
6) agr_summer_model.R
172
split the data into two halves
d$ItemNo = parse_number(as.character(d$ItemNo))
de = d %>% filter(ItemNo %% 2 == 0)
do = d %>% filter(ItemNo %% 2 == 1)
Data Variable
https://osf.io/cd5r8/
R_Code_Buffinton_MorganShort.R
173
Z-score transform RTs, compute the zRT difference, and remove NAs for the 2 groups
de <- de |>
  group_by(Subject) |>
  mutate(zRT = scale(Reaction.Time, center = T, scale = T),
         meanRT = mean(Reaction.Time)) |>
  select(1, 2, 4, 5) |>
  pivot_wider(names_from = Condition, values_from = zRT) |>
  mutate(zrt.diff = R - P) |>
  filter(!is.na(zrt.diff))
do <- do |>
  group_by(Subject) |>
  mutate(zRT = scale(Reaction.Time, center = T, scale = T),
         meanRT = mean(Reaction.Time)) |>
  select(1, 2, 4, 5) |>
  pivot_wider(names_from = Condition, values_from = zRT) |>
  mutate(zrt.diff = R - P) |>
  filter(!is.na(zrt.diff))
Data Variable
https://osf.io/cd5r8/
R_Code_Buffinton_MorganShort.R
174
fit models for even- and odd-numbered items (Bayesian & generalized LMM)
m_de = blmer(-1/Reaction.Time ~ Condition + (1 + Condition | Subject) + (1 + Condition | ItemNo),
             data = de,
             control = lmerControl(optimizer = "nloptwrap",
                                   optCtrl = list(algorithm = "NLOPT_LN_NELDERMEAD", maxit = 2e5)))
m_do = blmer(-1/Reaction.Time ~ Condition + (1 + Condition | Subject) + (1 + Condition | ItemNo),
             data = do,
             control = lmerControl(optimizer = "nloptwrap",
                                   optCtrl = list(algorithm = "NLOPT_LN_NELDERMEAD", maxit = 2e5)))
Statistical Modeling
https://osf.io/cd5r8/
R_Code_Buffinton_MorganShort.R
175
fit LMM to the whole dataset
m <- blmer(-1/Reaction.Time ~ Condition + (1 + Condition | Subject) + (1 + Condition | ItemNo),
           data = d_trimmed,
           control = lmerControl(optimizer = "nloptwrap",
                                 optCtrl = list(algorithm = "NLOPT_LN_NELDERMEAD", maxit = 2e5)))
summary(m)
Statistical Modeling
https://osf.io/cd5r8/
R_Code_Buffinton_MorganShort.R
176
calculate shared and unique variance
x1 = rsq - rx2y^2
x2 = rsq - rx1y^2
x1_x2 = rsq - x1 - x2
c(x1, x2, x1_x2, rsq)
}
# The next function is truncated in the source; a hedged completion using the
# standard semipartial (part) correlation formula (assumed, not from the source):
semipartial_corr = function(r12, r13, r23) {
  (r12 - r13 * r23) / sqrt(1 - r23^2)
}
Statistical Modeling
https://osf.io/sqfnt/
Goal
177
create empty df
df <- data.frame( `rx1y` = numeric(), `rx2y` = numeric(), `rx1x2` = numeric(), `rx1y_x2` = numeric(), `rx2y_x1` = numeric(), `x1` = numeric(), `x2` = numeric(), `Common` = numeric(), `Total` = numeric() )
Data Variable
https://osf.io/sqfnt/
Goal
178
> Left panel: QQ plot (uniform distribution). > Right panel: Residuals against predicted values; shaded (due to sample size), with extreme residuals colored red. 3) MAIN EFFECTS OF KEY VARIABLES: AspElevForm
emmip(PrefRate, ~AspElevForm, type = "response", CIs = TRUE)
(emm <- emmeans(PrefRate, specs = ~AspElevForm, type = "response"))
pairs(emm)
Visualization
https://osf.io/2sz48/
Model_Preference.R
179
Operator on x, rescaled to be in y units
Hlist <- list()
for (i in seq(nt)) {
  Hlist[[paste0("H", i)]] <- with(data, t(apply(data[[i]]$sdfp, 1, function(x) x * data[[i]]$prior)) %*% A)
}
H <- bdiag(unlist(Hlist))
strl <- function(x) {
  # zero out entries below 1/1000 of the row maximum
  x[x < max(x)/1000] <- 0
  return(x)
}
H <- as(t(apply(H, 1, strl)), 'dgCMatrix') # Make sparser
Data Variable
https://osf.io/53w96/
INLA_GHG_GMD.R
180
Creates the variable 'distances', a 540 × 3 distance matrix between stimuli and category prototypes.
distances = as.matrix(pdist(coordinates, prototypes))
Data Variable
https://osf.io/hrf5t/
runPrototype.R
181
calculate similarity coefficient (mean of signif_line$similarity.scores)
mean(signif_line$similarity.scores[, 2])
mean(coef_line$similarity.scores[, 2])
mean(varimp_line$similarity.scores[, 2])
Statistical Modeling
https://osf.io/3gfqn/
VADIS_genitives_outer_circle.R
182
Calculate Priestley-Taylor Ep
data$PT.Ep <- 1.26/lambda.w*deltav/(deltav+rho.air)*Rn
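# Hedged reference note: the line above matches the standard Priestley-Taylor
# form Ep = alpha * (Delta / (Delta + gamma)) * Rn / lambda with alpha = 1.26;
# deltav sits in the Delta slot and rho.air in the gamma (psychrometric
# constant) slot. That variable mapping is inferred from the expression, not
# stated in the source. A toy check with illustrative magnitudes (~20 C):
with(list(lambda.w = 2.45e6, deltav = 145, rho.air = 66, Rn = 400),
     1.26 / lambda.w * deltav / (deltav + rho.air) * Rn)  # ~1.4e-4 kg m-2 s-1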
Data Variable
https://osf.io/5ezfk/
ETfun.R
183
plot hits, FAs, misses, and correct rejections for each effect size/sample size combo
ap <- aggregate(hits ~ criterion + sampleSize + effectSizes, data = allDt, mean)
ap2 <- aggregate(fa ~ criterion + sampleSize + effectSizes, data = allDt, mean)
hFA <- merge(ap, ap2, by = c("criterion", "sampleSize", "effectSizes"))
ap2 <- aggregate(miss ~ criterion + sampleSize + effectSizes, data = allDt, mean)
hFA <- merge(hFA, ap2, by = c("criterion", "sampleSize", "effectSizes"))
ap2 <- aggregate(corrRej ~ criterion + sampleSize + effectSizes, data = allDt, mean)
hFA <- merge(hFA, ap2, by = c("criterion", "sampleSize", "effectSizes"))
hFA$hits <- hFA$hits / numStudies
hFA$fa <- hFA$fa / numStudies
hFA$corrRej <- hFA$corrRej / numStudies
hFA$miss <- hFA$miss / numStudies
Visualization
https://osf.io/hzncs/
Witt_SDT_Simulations_SeveralTests.R
184
plot errors only (FAs and misses, separately). Plot ROC distance
aa <- aggregate(ROCdist ~ criterion, data = allDt, mean)
ab <- aggregate(ROCdist ~ criterion, data = allDt, sd)
ab$ci <- qnorm(.975) * ab$ROCdist / sqrt(numStudies)
titleText <- ifelse(length(sSizesAll) < 2,
                    paste("N=", sSizesAll[1], "d=", effSzAll[1]),
                    paste("firstES:", effSzAll[1]))
plot(seq(1:length(crits)), aa$ROCdist, bty = "l", xaxt = "n",
     xlab = "Criterion for Statistical Significance", ylab = "Distance to Perfection",
     pch = 19, col = rainbow(length(crits)), cex = 2,
     ylim = c(0, max(aa$ROCdist) + max(ab$ci)), main = titleText)
axis(side = 1, at = seq(1:length(crits)), labels = crits)
for (i in 1:length(crits)) {
  segments(i, aa$ROCdist[i] - ab$ci[i], i, aa$ROCdist[i] + ab$ci[i])
}
Visualization
https://osf.io/hzncs/
Witt_SDT_Simulations_SeveralTests.R
185
calculate the variance and bias for the log(DR)
log.var <- ((res$se.tau2)^2)/4 * 1/(sum(w.star))^2 * (sum(1/(res$vi + res$tau2)^2))^2
log.sd <- sqrt(log.var)
bias <- 1/2 * (res$se.tau2)^2 * (1/2/sum(w.star)^2 - 1/sum(w.star) * sum(1/(res$vi + res$tau2)^3))
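# Hedged background note: these expressions have the shape of a delta-method
# approximation for a transformed estimator, Var(g(theta_hat)) ~ g'(theta)^2 * Var(theta_hat),
# plus a second-order bias term (1/2) * g''(theta) * Var(theta_hat), applied to
# log(DR) as a function of tau^2. This reading is inferred from the form of
# the code, not stated in the source.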
Statistical Modeling
https://osf.io/gwn4y/
Hospital_Stay_of_Stroke_Patients_forest_plot.R
186
create the forest plot including extra information (study names, weights, observed effects and associated variances).
f <- forest(res, col = "blue", border = "blue", ylim = c(-8,12), xlim = c(-4,5), pch = 19, slab = dat$study, showweights = FALSE, addfit = FALSE, refline = FALSE, ilab = cbind(paste(format(round(res$yi,2), nsmall = 2)) , paste(format(round(res$vi, 2), nsmall = 2)), paste(format(round(res$ni, 2))), paste(format(weights, nsmall = 2)), paste(format(weights.f, nsmall = 2))) , ilab.xpos = c(-3.15,-2.75, -2.35,-1.5,-0.75), ilab.pos = 2, efac = 0, digits = 2)
Visualization
https://osf.io/gwn4y/
Hospital_Stay_of_Stroke_Patients_forest_plot.R
187
STEP 2: Filter the data in the overall.data table according to preregistered inclusion criteria. Replace 0 with NA in the outcome columns
overall.data$LTScreenOut[overall.data$LTScreenOut == 0] <- NA # replace NAs with 0 for averaging in the next steps
overall.data$LTObjectOut[overall.data$LTObjectOut == 0] <- NA # replace NAs with 0 for averaging in the next steps
overall.data$FirstLookDurationObjectOut[overall.data$FirstLookDurationObjectOut == 0] <- NA # replace NAs with 0 for averaging in the next steps
Data Variable
https://osf.io/mp9td/
TablePrep_Third.R
188
fit indices function for the SEM and FA analyses
fa.CFI <- function(x) {
  nombre <- paste(x, "CFI", sep = ".")  # note: immediately overwritten below in the original
  nombre <- ((x$null.chisq - x$null.dof) - (x$STATISTIC - x$dof)) / (x$null.chisq - x$null.dof)
  return(nombre)
}
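# Hedged usage sketch: fa.CFI expects an object exposing null.chisq, null.dof,
# STATISTIC, and dof, as psych::fa results do; mtcars is used purely as demo data.
fa_fit <- psych::fa(mtcars, nfactors = 2)
fa.CFI(fa_fit)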
Statistical Modeling
https://osf.io/bhrwx/
evaluation_data_analysis.R
189
Check descriptive statistics: gender breakdown
df <- eval_data %>%
  group_by(gender) %>%
  summarise(counts = n())
df
Statistical Test
https://osf.io/bhrwx/
evaluation_data_analysis.R
190
Decompose variance. Estimate multilevel model (without predictors)
m_logistic <- lmer(estimate ~ 1 + (1|controls) + (1|age) + (1|year) + (1|age:year) + (1|age:controls) + (1|year:controls), data = results %>% rename(age = age_group))
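# Hedged follow-up sketch of the decomposition step the comment names (assumes
# lme4 is loaded; not shown in the source): read the variance components off
# the fitted model and express each as a share of the total.
vc <- as.data.frame(VarCorr(m_logistic))
setNames(vc$vcov / sum(vc$vcov), vc$grp)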
Statistical Modeling
https://osf.io/m72gb/
analysis_privacysetting.r
191
Visualize the first four principal components
fig.df <- data.frame(HR = c(FPCAdense$phi[, 1], FPCAdense$phi[, 2], FPCAdense$phi[, 3], FPCAdense$phi[, 4]),
                     pc = c(rep(1, 51), rep(2, 51), rep(3, 51), rep(4, 51)),
                     Time = rep(FPCAdense$workGrid, 4))
all.four <- ggplot(fig.df, aes(x = Time, y = HR)) +
  geom_line() +
  geom_vline(xintercept = .20, linetype = "dashed") +
  facet_grid(. ~ pc) +
  theme_bw()
all.four
ggsave("../figures/figure5.png", width = 6, height = 4)
Visualization
https://osf.io/qj86m/
8_fda_socaccount.R
192
SPATIOTEMPORAL VARIABLES AND HOMING SUCCESS. Join territory center points to the dataframe to calculate translocation distance and homing success
trajectory <- trajectory.df %>% left_join(territory_centers, by = "id")%>% mutate(trans_dist = sqrt((x_home - first(x_utm))^2+(y_home - first(y_utm))^2), trans_group = ifelse(trans_dist > 100, "200m", "50m"))%>% dplyr::select(id, sex, trans_group, trans_dist, dt, x_utm, y_utm, x_home, y_home, time_lag_min = time_diff, dist)
Data Variable
https://osf.io/3bpn6/
dt_homing_dataproc.R
193
Tally the number of relocations for each individual per day
trajectory.df %>% group_by(id, date) %>% count(id) -> relocs.perday
Data Variable
https://osf.io/3bpn6/
dt_homing_dataproc.R
194
Plot full circular info by sex: start with a blank plot, then add group-specific points
dev.new(); par(mai = c(1, 1, 0.1, 0.1))
par(mar = c(0.5, 0.5, 0.5, 0.5))
pdf("dt_circ_50m.pdf")
plot(corient50_m, bg = rgb(0, 0.749, 0.769), pch = 21, cex = 1.5, lwd = 2,
     stack = TRUE, bin = 60, xlim = c(-1.2, 1.2), ylim = c(-1.2, 1.2), sep = 0.05,
     shrink = 1, tcl.text = -0.125, control.circle = circle.control(lwd = 2))
ticks.circular(circular(seq(0, 2*pi, pi/2)), tcl = 0.075)
par(new = T)
plot(corient50_f, bg = rgb(0.973, 0.463, 0.427), pch = 21, cex = 1.5, lwd = 2,
     stack = T, bins = 60, sep = -0.05, shrink = 1.3, axes = FALSE,
     control.circle = circle.control(lwd = 1))
arrows.circular(mean(corient50_m), y = rho.circular(corient50_m), col = rgb(0, 0.749, 0.769), lwd = 5)
arrows.circular(mean(corient50_f), y = rho.circular(corient50_f), col = rgb(0.973, 0.463, 0.427), lwd = 5)
par(new = T)
plot(corient50_m, col = NA, shrink = 2.5, axes = FALSE, control.circle = circle.control(lty = 2, lwd = 1))
ticks.circular(circular(seq(0, 2*pi, pi/8)), tcl = 0.2)
lines(density.circular(corient50_m, bw = 30), shrink = 1, col = rgb(0, 0.749, 0.769, 0.7), lwd = 2, lty = 1)
lines(density.circular(corient50_f, bw = 30), col = rgb(0.973, 0.463, 0.427, 0.7), lwd = 2, lty = 1)
dev.new(); par(mai = c(1, 1, 0.1, 0.1))
par(mar = c(0.5, 0.5, 0.5, 0.5))
pdf("dt_circ_200m.pdf")
plot(corient200_m, bg = rgb(0, 0.749, 0.769), pch = 21, cex = 1.5, lwd = 2,
     stack = TRUE, bin = 60, xlim = c(-1.2, 1.2), ylim = c(-1.2, 1.2), sep = 0.05,
     shrink = 1, tcl.text = -0.125, control.circle = circle.control(lwd = 2))
ticks.circular(circular(seq(0, 2*pi, pi/2)), tcl = 0.075)
par(new = T)
plot(corient200_f, bg = rgb(0.973, 0.463, 0.427), pch = 21, cex = 1.5, lwd = 2,
     stack = T, bins = 60, sep = -0.05, shrink = 1.3, axes = FALSE,
     control.circle = circle.control(lwd = 1))
arrows.circular(mean(corient200_m), y = rho.circular(corient200_m), col = rgb(0, 0.749, 0.769), lwd = 5)
arrows.circular(mean(corient200_f), y = rho.circular(corient200_f), col = rgb(0.973, 0.463, 0.427), lwd = 5)
par(new = T)
plot(corient200_m, col = NA, shrink = 2.5, axes = FALSE, control.circle = circle.control(lty = 2, lwd = 1))
ticks.circular(circular(seq(0, 2*pi, pi/8)), tcl = 0.2)
lines(density.circular(corient200_m, bw = 30), shrink = 1, col = rgb(0, 0.749, 0.769, 0.7), lwd = 2, lty = 1)
lines(density.circular(corient200_f, bw = 30), col = rgb(0.973, 0.463, 0.427, 0.7), lwd = 2, lty = 1)
Visualization
https://osf.io/3bpn6/
dt_homing_dataproc.R
195
#' Helper function to prepare raw keyboard data
#'
#' @author F. Bemmann
#' @family Preprocessing functions
#' @description This function unfolds JSON data into one column per key-value pair.
#' @export
parseJsonColumnSensing = function(df, column_name) {
  parseJsonColumn = function(x) {
    str_c("[ ", str_c(x, collapse = ",", sep = " "), " ]") %>%
      jsonlite::fromJSON(flatten = T) %>%
      as_tibble()
  }
  df2 = df %>%
    select(user_uuid, client_event_id, !!column_name) %>%
    filter(!is.na(!!rlang::sym(column_name))) %>%
    map_dfc(.f = parseJsonColumn) %>%
    distinct()
  colnames(df2)[1:2] = c("user_uuid", "client_event_id")
  df = left_join(df, df2, by = c("user_uuid", "client_event_id"))
  df[, column_name] = NULL
  return(df)
}
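# Hedged usage sketch; the data frame and column name here are hypothetical,
# not from the source:
# keyboard_df <- parseJsonColumnSensing(keyboard_df, "event_payload_json")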
Data Variable
https://osf.io/b7krz/
helper_JsonFormat.R
196
Separate ANOVAs for each subscale. Pairwise t-tests with an adjusted alpha level of .05/4 = .0125 (because there are four relevant comparisons)
summary(aov(value ~ Fan*time + Error(VP_t0/(time)), data = dat[dat$scale == "Attentiveness", ]))
pairwise.t.test(dat[dat$scale == "Attentiveness", ]$value,
                paste(dat[dat$scale == "Attentiveness", ]$Fan, dat[dat$scale == "Attentiveness", ]$time),
                p.adj = "none")
Statistical Test
https://osf.io/xcthg/
0-Script.R
197
Models. Contrasts: (reverse) Helmert for cond_speech and cond_mask, treatment for speech_mask, sum for group. Factor variables:
df <- df %>%
  mutate(group = factor(group), # oc, pd
         gender = factor(intake_gender), # f, m
         cond_mask = factor(cond_mask, levels = c("nm", "sm", "kn")),
         cond_speech = factor(cond_speech, levels = c("habitual", "clear", "loud"))) %>%
  mutate(speech_mask = paste(cond_speech, cond_mask, sep = "_"))
levels(df$group)       # oc, pd
levels(df$gender)      # f, m
levels(df$cond_speech) # habitual, clear, loud
levels(df$cond_mask)   # nm, sm, kn
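# The contrast assignments named in the comment are not shown in this excerpt;
# a hedged sketch of how they would typically be set (assumed, not from the
# source; contr.helmert is R's built-in, which some texts call reverse Helmert,
# and treatment coding is already R's default for factors like speech_mask):
contrasts(df$cond_speech) <- contr.helmert(3)
contrasts(df$cond_mask) <- contr.helmert(3)
contrasts(df$group) <- contr.sum(2)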
Statistical Modeling
https://osf.io/5s34w/
simpd_helper.R
198
4) Descriptives and data inspection: total duration on survey
stat.desc(working_file %>% group_by(pp) %>% slice(1) %>% pull(duration))
Data Variable
https://osf.io/g8kbu/
dataAnalysisRewardAppsSurvey.R
199
Bayesian t-tests
ttestBF(wide$high, wide$neutral, paired = TRUE)
ttestBF(wide$low, wide$neutral, paired = TRUE)
ttestBF(wide$high, wide$low, paired = TRUE)
formatC(3057348376, format = "e", digits = 2)
Statistical Test
https://osf.io/g8kbu/
dataAnalysisRewardAppsSurvey.R
200
Confidence interval as a vector
result <- c("lower" = vec_mean - error, "upper" = vec_mean + error) return(result) }
Statistical Modeling
https://osf.io/92e6c/
fill_summary_table.R