code | docstring | func_name | language | repo | path | url | license |
---|---|---|---|---|---|---|---|
pub fn create_dir_if_not_exists(&self) -> Result<(), Error> {
if !self.path.exists() {
fs::create_dir(&self.path)
} else {
Ok(())
}
} | Creates the directory at `self.path` if it does not exist. | create_dir_if_not_exists | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
pub fn max_inbound_peers(&self) -> u32 {
self.max_peers.saturating_sub(self.max_outbound_peers)
} | Gets maximum inbound peers. | max_inbound_peers | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
pub fn max_outbound_peers(&self) -> u32 {
self.max_outbound_peers
} | Gets maximum outbound peers. | max_outbound_peers | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
pub fn max_send_buffer(&self) -> usize {
self.max_send_buffer.unwrap_or(DEFAULT_SEND_BUFFER)
} | Gets maximum send buffer size. | max_send_buffer | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
pub fn channel_size(&self) -> usize {
self.channel_size.unwrap_or(DEFAULT_CHANNEL_SIZE)
} | Gets the channel size. | channel_size | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
pub fn whitelist_peers(&self) -> Vec<Multiaddr> {
self.whitelist_peers.clone()
} | Gets the list of whitelist peers. | whitelist_peers | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
pub fn bootnodes(&self) -> Vec<Multiaddr> {
self.bootnodes.clone()
} | Gets a list of bootnodes. | bootnodes | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
pub fn outbound_peer_service_enabled(&self) -> bool {
self.connect_outbound_interval_secs > 0
} | Checks whether the outbound peer service should be enabled. | outbound_peer_service_enabled | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
pub fn dns_seeding_service_enabled(&self) -> bool {
!self.dns_seeds.is_empty()
} | Checks whether the DNS seeding service should be enabled. | dns_seeding_service_enabled | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
const fn default_reuse() -> bool {
true
} | By default, reuse port is enabled so that any outbound connection of the node can become a potential
listen address, which improves the robustness of the network. | default_reuse | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
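A hedged sketch of how a `default_*` function like this is typically wired into a serde-deserialized config struct; the struct and field names (`NetworkConfigSketch`, `reuse_port`) are illustrative assumptions, not the actual ckb definitions.

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct NetworkConfigSketch {
    // When `reuse_port` is absent from the config file, serde calls
    // `default_reuse()` and the field defaults to `true`.
    #[serde(default = "default_reuse")]
    reuse_port: bool,
}

const fn default_reuse() -> bool {
    true
}
```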
const fn default_reuse_tcp_with_ws() -> bool {
true
} | By default, allow ckb to upgrade tcp listening to tcp + ws listening | default_reuse_tcp_with_ws | rust | nervosnetwork/ckb | util/app-config/src/configs/network.rs | https://github.com/nervosnetwork/ckb/blob/master/util/app-config/src/configs/network.rs | MIT |
pub fn new(inner: TokioHandle, guard: Option<Sender<()>>) -> Self {
Self { inner, guard }
} | Create a new Handle | new | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
pub fn drop_guard(&mut self) {
let _ = self.guard.take();
} | Drop the guard | drop_guard | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
pub fn enter<F, R>(&self, f: F) -> R
where
F: FnOnce() -> R,
{
let _enter = self.inner.enter();
f()
} | Enter the runtime context. This allows you to construct types that must
have an executor available on creation such as [`tokio::time::Sleep`] or [`tokio::net::TcpStream`].
It will also allow you to call methods such as [`tokio::spawn`]. | enter | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
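A minimal usage sketch of `enter` (imports assumed, `Duration` from `std::time`): constructing a `tokio::time::Sleep` requires a runtime context, which `enter` provides from synchronous code.

```rust
use std::time::Duration;

// Constructing a `Sleep` outside a runtime context panics with
// "there is no reactor running"; `enter` avoids that from sync code.
fn make_sleep(handle: &Handle) -> tokio::time::Sleep {
    handle.enter(|| tokio::time::sleep(Duration::from_millis(10)))
}
```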
pub fn spawn<F>(&self, future: F) -> JoinHandle<F::Output>
where
F: Future + Send + 'static,
F::Output: Send + 'static,
{
let tokio_task_guard = self.guard.clone();
self.inner.spawn(async move {
// move tokio_task_guard into the spawned future
// so that it will be dropped when the future is finished
let _guard = tokio_task_guard;
future.await
})
} | Spawns a future onto the runtime.
This spawns the given future onto the runtime's executor | spawn | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
pub fn block_on<F: Future>(&self, future: F) -> F::Output {
self.inner.block_on(future)
} | Run a future to completion on the Tokio runtime from a synchronous context. | block_on | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
pub fn spawn_blocking<F, R>(&self, f: F) -> JoinHandle<R>
where
F: FnOnce() -> R + Send + 'static,
R: Send + 'static,
{
self.inner.spawn_blocking(f)
} | Runs the provided closure on the runtime's blocking thread pool.
This spawns the given blocking function onto the runtime's blocking executor | spawn_blocking | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
pub fn into_inner(self) -> TokioHandle {
self.inner
} | Transform into the inner tokio handle | into_inner | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
fn new_runtime(worker_num: Option<usize>) -> Runtime {
Builder::new_multi_thread()
.enable_all()
.worker_threads(worker_num.unwrap_or_else(|| available_parallelism().unwrap().into()))
.thread_name_fn(|| {
static ATOMIC_ID: AtomicU32 = AtomicU32::new(0);
let id = ATOMIC_ID
.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |n| {
// A long thread name will be cut to 15 characters in debug tools.
// Such as "top", "htop", "gdb" and so on.
// It's a kernel limit.
//
// So if we want to see the whole name in debug tools,
// this number should have 6 digits at most,
// since the prefix uses 9 characters in below code.
//
// There is still an issue:
// when the id wraps around, we cannot tell whether the old id
// has been released or not.
// But we can ignore this, because wrapping is practically impossible.
if n >= 999_999 { Some(0) } else { Some(n + 1) }
})
.expect("impossible since the above closure must return Some(number)");
format!("GlobalRt-{id}")
})
.build()
.expect("ckb runtime initialized")
} | Create a new runtime with unique name. | new_runtime | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
pub fn new_global_runtime(worker_num: Option<usize>) -> (Handle, Receiver<()>, Runtime) {
let runtime = new_runtime(worker_num);
let handle = runtime.handle().clone();
let (guard, handle_stop_rx): (Sender<()>, Receiver<()>) = tokio::sync::mpsc::channel::<()>(1);
(Handle::new(handle, Some(guard)), handle_stop_rx, runtime)
} | Create a new multi-threaded tokio Runtime; return its `Handle`, a stop `Receiver`, and the `Runtime` itself | new_global_runtime | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
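A hedged end-to-end sketch of how the pieces returned above fit together; exact import paths are omitted and the shutdown pattern is inferred from the guard/receiver design.

```rust
fn main() {
    // Keep `_runtime` alive for as long as tasks should be able to run.
    let (handle, mut stop_rx, _runtime) = new_global_runtime(None);

    // Each spawned task holds a clone of the guard Sender until it finishes.
    handle.spawn(async {
        println!("hello from the global runtime");
    });

    // Run a future to completion from synchronous code.
    assert_eq!(handle.block_on(async { 21 * 2 }), 42);

    // Dropping the Handle releases its guard; once every spawned task has also
    // finished (dropping its clone), `recv` resolves with `None`.
    drop(handle);
    let _ = stop_rx.blocking_recv();
}
```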
pub fn new_background_runtime() -> Handle {
let runtime = new_runtime(None);
let handle = runtime.handle().clone();
let (guard, mut handle_stop_rx): (Sender<()>, Receiver<()>) =
tokio::sync::mpsc::channel::<()>(1);
let _thread = std::thread::Builder::new()
.name("GlobalRtBuilder".to_string())
.spawn(move || {
let ret = runtime.block_on(async move { handle_stop_rx.recv().await });
ckb_logger::debug!("Global runtime finished {:?}", ret);
})
.expect("tokio runtime started");
Handle::new(handle, Some(guard))
} | Create a new multi-threaded tokio Runtime kept alive on a background thread until the stop channel closes, and return its `Handle`.
NOTICE: This is only used in testing | new_background_runtime | rust | nervosnetwork/ckb | util/runtime/src/native.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/native.rs | MIT |
pub fn spawn<F>(&self, future: F)
where
F: Future<Output = ()> + 'static,
{
spawn_local(async move { future.await })
}
}
impl Spawn for Handle {
fn spawn_task<F>(&self, future: F)
where
F: Future<Output = ()> + 'static,
{
self.spawn(future);
} | Spawns a future onto the runtime.
This spawns the given future onto the runtime's executor | spawn | rust | nervosnetwork/ckb | util/runtime/src/browser.rs | https://github.com/nervosnetwork/ckb/blob/master/util/runtime/src/browser.rs | MIT |
pub fn run_app(version: Version) -> Result<(), ExitCode> {
// Always print backtrace on panic.
unsafe {
::std::env::set_var("RUST_BACKTRACE", "full");
}
let (bin_name, app_matches) = cli::get_bin_name_and_matches(&version);
if let Some((cli, matches)) = app_matches.subcommand() {
match cli {
cli::CMD_INIT => {
return subcommand::init(Setup::init(matches)?);
}
cli::CMD_LIST_HASHES => {
return subcommand::list_hashes(Setup::root_dir_from_matches(matches)?, matches);
}
cli::CMD_PEERID => {
if let Some((cli, matches)) = matches.subcommand() {
match cli {
cli::CMD_GEN_SECRET => return Setup::generate(matches),
cli::CMD_FROM_SECRET => {
return subcommand::peer_id(Setup::peer_id(matches)?);
}
_ => {}
}
}
}
_ => {}
}
}
let (cmd, matches) = app_matches
.subcommand()
.expect("SubcommandRequiredElseHelp");
#[cfg(not(target_os = "windows"))]
if run_daemon(cmd, matches) {
return run_app_in_daemon(version, bin_name, cmd, matches);
}
debug!("ckb version: {}", version);
run_app_inner(version, bin_name, cmd, matches)
} | The executable main entry.
It returns `Ok` when the process exits normally, otherwise the `ExitCode` is converted to the
process exit status code.
## Parameters
* `version` - The version is passed in so the bin crate can collect the version without triggering
re-linking. | run_app | rust | nervosnetwork/ckb | ckb-bin/src/lib.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/lib.rs | MIT |
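A hedged sketch of how a bin crate might drive `run_app`; the hypothetical `get_version()` helper and the `ExitCode`-to-exit-status conversion via `.into()` are assumptions based on the docstring above.

```rust
fn main() {
    // `get_version()` is a hypothetical helper standing in for however the bin
    // crate assembles its build-time version information.
    let version = get_version();
    if let Err(exit_code) = run_app(version) {
        // The docstring says a non-Ok ExitCode becomes the process exit status
        // code; the `.into()` conversion is assumed here.
        std::process::exit(exit_code.into());
    }
}
```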
pub fn raise_fd_limit() {
if let Some(limit) = fdlimit::raise_fd_limit() {
debug!("raise_fd_limit newly-increased limit: {}", limit);
}
} | Raise the soft open file descriptor resource limit to the hard resource
limit.
# Panics
Panics if [`libc::getrlimit`], [`libc::setrlimit`], or [`libc::sysctl`] fails.
darwin_fd_limit exists to work around an issue where launchctl on Mac OS X
defaults the rlimit maxfiles to 256/unlimited. The default soft limit of 256
ends up being far too low for our multithreaded scheduler testing, depending
on the number of cores available. | raise_fd_limit | rust | nervosnetwork/ckb | ckb-bin/src/helper.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/helper.rs | MIT |
pub fn basic_app() -> Command {
let command = Command::new(BIN_NAME)
.author("Nervos Core Dev <[email protected]>")
.about("Nervos CKB - The Common Knowledge Base")
.subcommand_required(true)
.arg_required_else_help(true)
.term_width(110)
.arg(
Arg::new(ARG_CONFIG_DIR)
.global(true)
.short('C')
.value_name("path")
.action(clap::ArgAction::Set)
.help(
"Run as if CKB was started in <path>, instead of the current working directory.",
),
)
.subcommand(run())
.subcommand(miner())
.subcommand(export())
.subcommand(import())
.subcommand(list_hashes())
.subcommand(init())
.subcommand(replay())
.subcommand(stats())
.subcommand(reset_data())
.subcommand(peer_id())
.subcommand(migrate());
#[cfg(not(target_os = "windows"))]
let command = command.subcommand(daemon());
command
} | return root clap Command | basic_app | rust | nervosnetwork/ckb | ckb-bin/src/cli.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/cli.rs | MIT |
pub fn get_bin_name_and_matches(version: &Version) -> (String, ArgMatches) {
let bin_name = std::env::args()
.next()
.unwrap_or_else(|| BIN_NAME.to_owned());
let matches = basic_app()
.version(version.short())
.long_version(version.long())
.get_matches();
(bin_name, matches)
} | Parse the command line arguments by supplying the version information.
The version is used to generate the help message and output for `--version`. | get_bin_name_and_matches | rust | nervosnetwork/ckb | ckb-bin/src/cli.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/cli.rs | MIT |
pub fn from_matches(
bin_name: String,
subcommand_name: &str,
matches: &ArgMatches,
) -> Result<Setup, ExitCode> {
let root_dir = Self::root_dir_from_matches(matches)?;
let mut config = AppConfig::load_for_subcommand(root_dir, subcommand_name)?;
config.set_bin_name(bin_name);
#[cfg(feature = "with_sentry")]
let is_sentry_enabled = is_daemon(subcommand_name) && config.sentry().is_enabled();
Ok(Setup {
#[cfg(feature = "with_sentry")]
subcommand_name: subcommand_name.to_string(),
config,
#[cfg(feature = "with_sentry")]
is_sentry_enabled,
})
} | Boots the ckb process by parsing the command line arguments and loading the config file. | from_matches | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn run(self, matches: &ArgMatches) -> Result<RunArgs, ExitCode> {
let consensus = self.consensus()?;
let chain_spec_hash = self.chain_spec()?.hash;
let mut config = self.config.into_ckb()?;
let mainnet_genesis = ckb_chain_spec::ChainSpec::load_from(
&ckb_resource::Resource::bundled("specs/mainnet.toml".to_string()),
)
.expect("load mainnet spec fail")
.build_genesis()
.expect("build mainnet genesis fail");
config.network.sync.min_chain_work =
if consensus.genesis_block.hash() == mainnet_genesis.hash() {
MIN_CHAIN_WORK_500K
} else {
u256!("0x0")
};
config.network.sync.assume_valid_targets = matches
.get_one::<String>(cli::ARG_ASSUME_VALID_TARGET)
.map(|concatenated_targets| {
concatenated_targets
.split(',')
.map(|s| H256::from_str(&s[2..]))
.collect::<Result<Vec<H256>, _>>()
.map_err(|err| {
error!("Invalid assume valid target: {}", err);
ExitCode::Cli
})
})
.transpose()?; // Converts Option<Result<T, E>> to Result<Option<T>, E>
if config.network.sync.assume_valid_targets.is_none() {
config.network.sync.assume_valid_targets = match consensus.id.as_str() {
ckb_constant::hardfork::mainnet::CHAIN_SPEC_NAME => Some(
ckb_constant::default_assume_valid_target::mainnet::default_assume_valid_targets().iter().map(|target|
H256::from_str(&target[2..]).expect("default assume_valid_target for mainnet must be valid")).collect::<Vec<H256>>()),
ckb_constant::hardfork::testnet::CHAIN_SPEC_NAME => Some(
ckb_constant::default_assume_valid_target::testnet::default_assume_valid_targets().iter().map(|target|
H256::from_str(&target[2..]).expect("default assume_valid_target for testnet must be valid")).collect::<Vec<H256>>()),
_ => None,
};
}
if let Some(ref assume_valid_targets) = config.network.sync.assume_valid_targets {
if let Some(first_target) = assume_valid_targets.first() {
if assume_valid_targets.len() == 1 {
if first_target
== &H256::from_slice(&[0; 32]).expect("must parse Zero h256 successful")
{
info!("Disable assume valid targets since assume_valid_targets is zero");
config.network.sync.assume_valid_targets = None;
} else {
info!(
"assume_valid_targets set to {:?}",
config.network.sync.assume_valid_targets
);
}
}
}
}
Ok(RunArgs {
config,
consensus,
block_assembler_advanced: matches.get_flag(cli::ARG_BA_ADVANCED),
skip_chain_spec_check: matches.get_flag(cli::ARG_SKIP_CHAIN_SPEC_CHECK),
overwrite_chain_spec: matches.get_flag(cli::ARG_OVERWRITE_CHAIN_SPEC),
chain_spec_hash,
indexer: matches.get_flag(cli::ARG_INDEXER),
rich_indexer: matches.get_flag(cli::ARG_RICH_INDEXER),
#[cfg(not(target_os = "windows"))]
daemon: matches.get_flag(cli::ARG_DAEMON),
})
} | Executes `ckb run`. | run | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn migrate(self, matches: &ArgMatches) -> Result<MigrateArgs, ExitCode> {
let consensus = self.consensus()?;
let config = self.config.into_ckb()?;
let check = matches.get_flag(cli::ARG_MIGRATE_CHECK);
let force = matches.get_flag(cli::ARG_FORCE);
let include_background = matches.get_flag(cli::ARG_INCLUDE_BACKGROUND);
Ok(MigrateArgs {
config,
consensus,
check,
force,
include_background,
})
} | `migrate` subcommand has one `flags` arg, trigger this arg with "--check" | migrate | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn miner(self, matches: &ArgMatches) -> Result<MinerArgs, ExitCode> {
let spec = self.chain_spec()?;
let memory_tracker = self.config.memory_tracker().to_owned();
let config = self.config.into_miner()?;
let pow_engine = spec.pow_engine();
let limit = *matches
.get_one::<u128>(cli::ARG_LIMIT)
.expect("has default value");
Ok(MinerArgs {
pow_engine,
config: config.miner,
memory_tracker,
limit,
})
} | Executes `ckb miner`. | miner | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn replay(self, matches: &ArgMatches) -> Result<ReplayArgs, ExitCode> {
let consensus = self.consensus()?;
let config = self.config.into_ckb()?;
let tmp_target = matches
.get_one::<PathBuf>(cli::ARG_TMP_TARGET)
.ok_or_else(|| {
eprintln!("Args Error: {:?} no found", cli::ARG_TMP_TARGET);
ExitCode::Cli
})?
.clone();
let profile = if matches.get_flag(cli::ARG_PROFILE) {
let from = matches.get_one::<u64>(cli::ARG_FROM).cloned();
let to = matches.get_one::<u64>(cli::ARG_TO).cloned();
Some((from, to))
} else {
None
};
let sanity_check = matches.get_flag(cli::ARG_SANITY_CHECK);
let full_verification = matches.get_flag(cli::ARG_FULL_VERIFICATION);
Ok(ReplayArgs {
config,
consensus,
tmp_target,
profile,
sanity_check,
full_verification,
})
} | Executes `ckb replay`. | replay | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn stats(self, matches: &ArgMatches) -> Result<StatsArgs, ExitCode> {
let consensus = self.consensus()?;
let config = self.config.into_ckb()?;
let from = matches.get_one::<u64>(cli::ARG_FROM).cloned();
let to = matches.get_one::<u64>(cli::ARG_TO).cloned();
Ok(StatsArgs {
config,
consensus,
from,
to,
})
} | Executes `ckb stats`. | stats | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn import(self, matches: &ArgMatches) -> Result<ImportArgs, ExitCode> {
let consensus = self.consensus()?;
let config = self.config.into_ckb()?;
let source = matches
.get_one::<PathBuf>(cli::ARG_SOURCE)
.ok_or_else(|| {
eprintln!("Args Error: {:?} no found", cli::ARG_SOURCE);
ExitCode::Cli
})?
.clone();
Ok(ImportArgs {
config,
consensus,
source,
})
} | Executes `ckb import`. | import | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn export(self, matches: &ArgMatches) -> Result<ExportArgs, ExitCode> {
let consensus = self.consensus()?;
let config = self.config.into_ckb()?;
let target = matches
.get_one::<PathBuf>(cli::ARG_TARGET)
.ok_or_else(|| {
eprintln!("Args Error: {:?} no found", cli::ARG_TARGET);
ExitCode::Cli
})?
.clone();
Ok(ExportArgs {
config,
consensus,
target,
})
} | Executes `ckb export`. | export | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn init(matches: &ArgMatches) -> Result<InitArgs, ExitCode> {
if matches.contains_id("list-specs") {
eprintln!(
"Deprecated: Option `--list-specs` is deprecated, use `--list-chains` instead"
);
}
if matches.contains_id("spec") {
eprintln!("Deprecated: Option `--spec` is deprecated, use `--chain` instead");
}
if matches.contains_id("export-specs") {
eprintln!("Deprecated: Option `--export-specs` is deprecated");
}
let root_dir = Self::root_dir_from_matches(matches)?;
let list_chains =
matches.get_flag(cli::ARG_LIST_CHAINS) || matches.contains_id("list-specs");
let interactive = matches.get_flag(cli::ARG_INTERACTIVE);
let force = matches.get_flag(cli::ARG_FORCE);
let chain = if !matches.contains_id("spec") {
matches
.get_one::<String>(cli::ARG_CHAIN)
.expect("has default value")
.to_string()
} else {
matches.get_one::<String>("spec").unwrap().to_string()
};
let rpc_port = matches
.get_one::<String>(cli::ARG_RPC_PORT)
.expect("has default value")
.to_string();
let p2p_port = matches
.get_one::<String>(cli::ARG_P2P_PORT)
.expect("has default value")
.to_string();
let (log_to_file, log_to_stdout) = match matches
.get_one::<String>(cli::ARG_LOG_TO)
.map(|s| s.as_str())
{
Some("file") => (true, false),
Some("stdout") => (false, true),
Some("both") => (true, true),
_ => unreachable!(),
};
let block_assembler_code_hash = matches.get_one::<String>(cli::ARG_BA_CODE_HASH).cloned();
let block_assembler_args: Vec<_> = matches
.get_many::<String>(cli::ARG_BA_ARG)
.unwrap_or_default()
.map(|a| a.to_owned())
.collect();
let block_assembler_hash_type = matches
.get_one::<String>(cli::ARG_BA_HASH_TYPE)
.and_then(|hash_type| serde_plain::from_str::<ScriptHashType>(hash_type).ok())
.expect("has default value");
let block_assembler_message = matches.get_one::<String>(cli::ARG_BA_MESSAGE).cloned();
let import_spec = matches.get_one::<String>(cli::ARG_IMPORT_SPEC).cloned();
let customize_spec = {
let genesis_message = matches.get_one::<String>(cli::ARG_GENESIS_MESSAGE).cloned();
CustomizeSpec { genesis_message }
};
Ok(InitArgs {
interactive,
root_dir,
chain,
rpc_port,
p2p_port,
list_chains,
force,
log_to_file,
log_to_stdout,
block_assembler_code_hash,
block_assembler_args,
block_assembler_hash_type,
block_assembler_message,
import_spec,
customize_spec,
})
} | Executes `ckb init`. | init | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn reset_data(self, matches: &ArgMatches) -> Result<ResetDataArgs, ExitCode> {
let config = self.config.into_ckb()?;
let data_dir = config.data_dir;
let db_path = config.db.path;
let indexer_path = config.indexer.store;
let rich_indexer_path = config
.indexer
.rich_indexer
.store
.parent()
.expect("rich-indexer store path should have parent dir")
.to_path_buf();
let network_config = config.network;
let network_dir = network_config.path.clone();
let network_peer_store_path = network_config.peer_store_path();
let network_secret_key_path = network_config.secret_key_path();
let logs_dir = Some(config.logger.log_dir);
let force = matches.get_flag(cli::ARG_FORCE);
let all = matches.get_flag(cli::ARG_ALL);
let database = matches.get_flag(cli::ARG_DATABASE);
let indexer = matches.get_flag(cli::ARG_INDEXER);
let rich_indexer = matches.get_flag(cli::ARG_RICH_INDEXER);
let network = matches.get_flag(cli::ARG_NETWORK);
let network_peer_store = matches.get_flag(cli::ARG_NETWORK_PEER_STORE);
let network_secret_key = matches.get_flag(cli::ARG_NETWORK_SECRET_KEY);
let logs = matches.get_flag(cli::ARG_LOGS);
Ok(ResetDataArgs {
force,
all,
database,
indexer,
rich_indexer,
network,
network_peer_store,
network_secret_key,
logs,
data_dir,
db_path,
indexer_path,
rich_indexer_path,
network_dir,
network_peer_store_path,
network_secret_key_path,
logs_dir,
})
} | Executes `ckb reset-data`. | reset_data | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn root_dir_from_matches(matches: &ArgMatches) -> Result<PathBuf, ExitCode> {
let config_dir = match matches.get_one::<String>(cli::ARG_CONFIG_DIR) {
Some(arg_config_dir) => PathBuf::from(arg_config_dir),
None => ::std::env::current_dir()?,
};
std::fs::create_dir_all(&config_dir)?;
Ok(config_dir)
} | Resolves the root directory for ckb from the command line arguments. | root_dir_from_matches | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn peer_id(matches: &ArgMatches) -> Result<PeerIDArgs, ExitCode> {
let path = matches
.get_one::<String>(cli::ARG_SECRET_PATH)
.expect("required on command line");
match read_secret_key(path.into()) {
Ok(Some(key)) => Ok(PeerIDArgs {
peer_id: key.peer_id(),
}),
Err(_) => Err(ExitCode::Failure),
Ok(None) => Err(ExitCode::IO),
}
} | Gets the network peer id by reading the network secret key. | peer_id | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
pub fn generate(matches: &ArgMatches) -> Result<(), ExitCode> {
let path = matches
.get_one::<String>(cli::ARG_SECRET_PATH)
.expect("required on command line");
write_secret_to_file(&generate_random_key(), path.into()).map_err(|_| ExitCode::IO)
} | Generates the network secret key. | generate | rust | nervosnetwork/ckb | ckb-bin/src/setup.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/setup.rs | MIT |
fn h256_as_validator() {
let ok_matches = basic_app().try_get_matches_from([
BIN_NAME,
"init",
"--ba-code-hash",
"0x00d1b86f6824d33a91b72ec20e2118cf7788a5ffff656bd1ea1ea638c764cb5f",
"--ba-arg",
"0x00",
]);
assert!(ok_matches.is_ok());
let err_matches = basic_app().try_get_matches_from([
BIN_NAME,
"init",
"--ba-code-hash",
"0xd1b86f6824d33a91b72ec20e2118cf7788a5ffff656bd1ea1ea638c764cb5f",
"--ba-arg",
"0x00",
]);
let err = err_matches.err().unwrap();
assert_eq!(clap::error::ErrorKind::ValueValidation, err.kind());
let err_matches = basic_app().try_get_matches_from([
BIN_NAME,
"init",
"--ba-code-hash",
"0x4630c0",
"--ba-arg",
"0x00",
]);
let err = err_matches.err().unwrap();
assert_eq!(clap::error::ErrorKind::ValueValidation, err.kind());
let ok_matches = basic_app().try_get_matches_from([
BIN_NAME,
"run",
"--assume-valid-target",
"0x94a4e93601f7295501891764880d37e9fcf886d02bf64b3d06f9137db8fa981e",
]);
assert!(ok_matches.is_ok());
let err_matches = basic_app().try_get_matches_from([
BIN_NAME,
"run",
"--assume-valid-target",
"0x94a4e93601f729550",
]);
let err = err_matches.err().unwrap();
assert_eq!(clap::error::ErrorKind::ValueValidation, err.kind());
} | Two cases in which the h256 validator is used:
ckb init --ba-code-hash
ckb run --assume-valid-target
not for `ckb init --ba-arg` && `ckb init --ba-message` | h256_as_validator | rust | nervosnetwork/ckb | ckb-bin/src/tests/cli.rs | https://github.com/nervosnetwork/ckb/blob/master/ckb-bin/src/tests/cli.rs | MIT |
pub fn spawn_freeze(&self) -> Option<FreezerClose> {
if let Some(freezer) = self.store.freezer() {
ckb_logger::info!("Freezer enabled");
let signal_receiver = new_crossbeam_exit_rx();
let shared = self.clone();
let freeze_jh = thread::Builder::new()
.spawn(move || {
loop {
match signal_receiver.recv_timeout(FREEZER_INTERVAL) {
Err(_) => {
if let Err(e) = shared.freeze() {
ckb_logger::error!("Freezer error {}", e);
break;
}
}
Ok(_) => {
ckb_logger::info!("Freezer closing");
break;
}
}
}
})
.expect("Start FreezerService failed");
register_thread("freeze", freeze_jh);
return Some(FreezerClose {
stopped: Arc::clone(&freezer.stopped),
});
}
None
} | Spawn freeze background thread that periodically checks and moves ancient data from the kv database into the freezer. | spawn_freeze | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn tx_pool_controller(&self) -> &TxPoolController {
&self.tx_pool_controller
} | TODO(doc): @quake | tx_pool_controller | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn txs_verify_cache(&self) -> Arc<TokioRwLock<TxVerificationCache>> {
Arc::clone(&self.txs_verify_cache)
} | TODO(doc): @quake | txs_verify_cache | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn notify_controller(&self) -> &NotifyController {
&self.notify_controller
} | TODO(doc): @quake | notify_controller | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn snapshot(&self) -> Guard<Arc<Snapshot>> {
self.snapshot_mgr.load()
} | TODO(doc): @quake | snapshot | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn cloned_snapshot(&self) -> Arc<Snapshot> {
Arc::clone(&self.snapshot())
} | Return arc cloned snapshot | cloned_snapshot | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn store_snapshot(&self, snapshot: Arc<Snapshot>) {
self.snapshot_mgr.store(snapshot)
} | TODO(doc): @quake | store_snapshot | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn refresh_snapshot(&self) {
let new = self.snapshot().refresh(self.store.get_snapshot());
self.store_snapshot(Arc::new(new));
} | TODO(doc): @quake | refresh_snapshot | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn new_snapshot(
&self,
tip_header: HeaderView,
total_difficulty: U256,
epoch_ext: EpochExt,
proposals: ProposalView,
) -> Arc<Snapshot> {
Arc::new(Snapshot::new(
tip_header,
total_difficulty,
epoch_ext,
self.store.get_snapshot(),
proposals,
Arc::clone(&self.consensus),
))
} | TODO(doc): @quake | new_snapshot | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn consensus(&self) -> &Consensus {
&self.consensus
} | TODO(doc): @quake | consensus | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn cloned_consensus(&self) -> Arc<Consensus> {
Arc::clone(&self.consensus)
} | Return arc cloned consensus | cloned_consensus | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn async_handle(&self) -> &Handle {
&self.async_handle
} | Return async runtime handle | async_handle | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn genesis_hash(&self) -> Byte32 {
self.consensus.genesis_hash()
} | TODO(doc): @quake | genesis_hash | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn store(&self) -> &ChainDB {
&self.store
} | TODO(doc): @quake | store | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn is_initial_block_download(&self) -> bool {
// Once this function has returned false, it must remain false.
if self.ibd_finished.load(Ordering::Acquire) {
false
} else if unix_time_as_millis().saturating_sub(self.snapshot().tip_header().timestamp())
> MAX_TIP_AGE
{
true
} else {
self.ibd_finished.store(true, Ordering::Release);
false
}
} | Return whether chain is in initial block download | is_initial_block_download | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn get_block_template(
&self,
bytes_limit: Option<u64>,
proposals_limit: Option<u64>,
max_version: Option<Version>,
) -> Result<Result<BlockTemplate, AnyError>, AnyError> {
self.tx_pool_controller()
.get_block_template(bytes_limit, proposals_limit, max_version)
} | Generate and return block_template | get_block_template | rust | nervosnetwork/ckb | shared/src/shared.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared.rs | MIT |
pub fn open_or_create_db(
bin_name: &str,
root_dir: &Path,
config: &DBConfig,
hardforks: HardForks,
) -> Result<RocksDB, ExitCode> {
let migrate = Migrate::new(&config.path, hardforks);
let read_only_db = migrate.open_read_only_db().map_err(|e| {
eprintln!("Migration error {e}");
ExitCode::Failure
})?;
if let Some(db) = read_only_db {
match migrate.check(&db, true) {
Ordering::Greater => {
eprintln!(
"The database was created by a higher version CKB executable binary \n\
and cannot be opened by the current binary.\n\
Please download the latest CKB executable binary."
);
Err(ExitCode::Failure)
}
Ordering::Equal => Ok(RocksDB::open(config, COLUMNS)),
Ordering::Less => {
let can_run_in_background = migrate.can_run_in_background(&db);
if migrate.require_expensive(&db, false) && !can_run_in_background {
eprintln!(
"For optimal performance, CKB recommends migrating your data into a new format.\n\
If you prefer to stick with the older version, \n\
it's important to note that they may have unfixed vulnerabilities.\n\
Before migrating, we strongly recommend backing up your data directory.\n\
To migrate, run `\"{}\" migrate -C \"{}\"` and confirm by typing \"YES\".",
bin_name,
root_dir.display()
);
Err(ExitCode::Failure)
} else if can_run_in_background {
info!("process migrations in background ...");
let db = RocksDB::open(config, COLUMNS);
migrate.migrate(db.clone(), true).map_err(|err| {
eprintln!("Run error: {err:?}");
ExitCode::Failure
})?;
Ok(db)
} else {
info!("Processing fast migrations ...");
let bulk_load_db_db = migrate.open_bulk_load_db().map_err(|e| {
eprintln!("Migration error {e}");
ExitCode::Failure
})?;
if let Some(db) = bulk_load_db_db {
migrate.migrate(db, false).map_err(|err| {
eprintln!("Run error: {err:?}");
ExitCode::Failure
})?;
}
Ok(RocksDB::open(config, COLUMNS))
}
}
}
} else {
let db = RocksDB::open(config, COLUMNS);
migrate.init_db_version(&db).map_err(|e| {
eprintln!("Migrate init_db_version error {e}");
ExitCode::Failure
})?;
Ok(db)
}
} | Open or create a rocksdb | open_or_create_db | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn new(
bin_name: &str,
root_dir: &Path,
db_config: &DBConfig,
ancient: Option<PathBuf>,
async_handle: Handle,
consensus: Consensus,
) -> Result<SharedBuilder, ExitCode> {
let db = open_or_create_db(
bin_name,
root_dir,
db_config,
consensus.hardfork_switch.clone(),
)?;
Ok(SharedBuilder {
db,
ancient_path: ancient,
consensus,
tx_pool_config: None,
notify_config: None,
store_config: None,
sync_config: None,
block_assembler_config: None,
async_handle,
fee_estimator_config: None,
header_map_tmp_dir: None,
})
} | Generates the base SharedBuilder with ancient path and async_handle | new | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn with_temp_db() -> Self {
use std::{
borrow::Borrow,
sync::atomic::{AtomicUsize, Ordering},
};
// once #[thread_local] is stable
// #[thread_local]
// static RUNTIME_HANDLE: unsync::OnceCell<...
thread_local! {
// NOTICE:we can't put the runtime directly into thread_local here,
// on windows the runtime in thread_local will get stuck when dropping
static RUNTIME_HANDLE: std::cell::OnceCell<Handle> = const { std::cell::OnceCell::new() };
}
static DB_COUNT: AtomicUsize = AtomicUsize::new(0);
static TMP_DIR: std::sync::OnceLock<TempDir> = std::sync::OnceLock::new();
let db = {
let db_id = DB_COUNT.fetch_add(1, Ordering::SeqCst);
let db_base_dir = TMP_DIR
.borrow()
.get_or_init(|| TempDir::new().unwrap())
.path()
.to_path_buf();
let db_dir = db_base_dir.join(format!("db_{db_id}"));
RocksDB::open_in(db_dir, COLUMNS)
};
RUNTIME_HANDLE.with(|runtime| SharedBuilder {
db,
ancient_path: None,
consensus: Consensus::default(),
tx_pool_config: None,
notify_config: None,
store_config: None,
sync_config: None,
block_assembler_config: None,
async_handle: runtime.get_or_init(new_background_runtime).clone(),
fee_estimator_config: None,
header_map_tmp_dir: None,
})
} | Generates the SharedBuilder with temp db
NOTICE: this is only used in testing | with_temp_db | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn consensus(mut self, value: Consensus) -> Self {
self.consensus = value;
self
} | TODO(doc): @quake | consensus | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn tx_pool_config(mut self, config: TxPoolConfig) -> Self {
self.tx_pool_config = Some(config);
self
} | TODO(doc): @quake | tx_pool_config | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn notify_config(mut self, config: NotifyConfig) -> Self {
self.notify_config = Some(config);
self
} | TODO(doc): @quake | notify_config | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn store_config(mut self, config: StoreConfig) -> Self {
self.store_config = Some(config);
self
} | TODO(doc): @quake | store_config | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn sync_config(mut self, config: SyncConfig) -> Self {
self.sync_config = Some(config);
self
} | TODO(doc): @eval-exec | sync_config | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn header_map_tmp_dir(mut self, header_map_tmp_dir: Option<PathBuf>) -> Self {
self.header_map_tmp_dir = header_map_tmp_dir;
self
} | TODO(doc): @eval-exec | header_map_tmp_dir | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn block_assembler_config(mut self, config: Option<BlockAssemblerConfig>) -> Self {
self.block_assembler_config = config;
self
} | TODO(doc): @quake | block_assembler_config | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn fee_estimator_config(mut self, config: FeeEstimatorConfig) -> Self {
self.fee_estimator_config = Some(config);
self
} | Sets the configuration for the fee estimator. | fee_estimator_config | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn async_handle(mut self, async_handle: Handle) -> Self {
self.async_handle = async_handle;
self
} | specifies the async_handle for the shared | async_handle | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn build(self) -> Result<(Shared, SharedPackage), ExitCode> {
let SharedBuilder {
db,
ancient_path,
consensus,
tx_pool_config,
store_config,
sync_config,
block_assembler_config,
notify_config,
async_handle,
fee_estimator_config,
header_map_tmp_dir,
} = self;
let tx_pool_config = tx_pool_config.unwrap_or_default();
let notify_config = notify_config.unwrap_or_default();
let store_config = store_config.unwrap_or_default();
let sync_config = sync_config.unwrap_or_default();
let consensus = Arc::new(consensus);
let header_map_memory_limit = sync_config.header_map.memory_limit.as_u64() as usize;
let ibd_finished = Arc::new(AtomicBool::new(false));
let header_map = Arc::new(HeaderMap::new(
header_map_tmp_dir,
header_map_memory_limit,
&async_handle,
Arc::clone(&ibd_finished),
));
let notify_controller = start_notify_service(notify_config, async_handle.clone());
let store = build_store(db, store_config, ancient_path).map_err(|e| {
eprintln!("build_store {e}");
ExitCode::Failure
})?;
let txs_verify_cache = Arc::new(TokioRwLock::new(init_cache()));
let (snapshot, table) =
Self::init_snapshot(&store, Arc::clone(&consensus)).map_err(|e| {
eprintln!("init_snapshot {e}");
ExitCode::Failure
})?;
let snapshot = Arc::new(snapshot);
let snapshot_mgr = Arc::new(SnapshotMgr::new(Arc::clone(&snapshot)));
let (sender, receiver) = ckb_channel::unbounded();
let fee_estimator_algo = fee_estimator_config
.map(|config| config.algorithm)
.unwrap_or(None);
let fee_estimator = match fee_estimator_algo {
Some(FeeEstimatorAlgo::WeightUnitsFlow) => FeeEstimator::new_weight_units_flow(),
Some(FeeEstimatorAlgo::ConfirmationFraction) => {
FeeEstimator::new_confirmation_fraction()
}
None => FeeEstimator::new_dummy(),
};
let (mut tx_pool_builder, tx_pool_controller) = TxPoolServiceBuilder::new(
tx_pool_config,
Arc::clone(&snapshot),
block_assembler_config,
Arc::clone(&txs_verify_cache),
&async_handle,
sender,
fee_estimator.clone(),
);
register_tx_pool_callback(
&mut tx_pool_builder,
notify_controller.clone(),
fee_estimator,
);
let block_status_map = Arc::new(DashMap::new());
let assume_valid_targets = Arc::new(Mutex::new({
let not_exists_targets: Option<Vec<H256>> =
sync_config.assume_valid_targets.clone().map(|targets| {
targets
.iter()
.filter(|&target_hash| {
let exists = snapshot.block_exists(&target_hash.pack());
if exists {
info!("assume-valid target 0x{} exists in local db", target_hash);
}
!exists
})
.cloned()
.collect::<Vec<H256>>()
});
if not_exists_targets
.as_ref()
.is_some_and(|targets| targets.is_empty())
{
info!("all assume-valid targets synchronized, enter full verification mode");
None
} else {
not_exists_targets
}
}));
let assume_valid_target_specified: Arc<Option<H256>> = Arc::new(
sync_config
.assume_valid_targets
.and_then(|targets| targets.last().cloned()),
);
let shared = Shared::new(
store,
tx_pool_controller,
notify_controller,
txs_verify_cache,
consensus,
snapshot_mgr,
async_handle,
ibd_finished,
assume_valid_targets,
assume_valid_target_specified,
header_map,
block_status_map,
);
let chain_services_builder = ChainServicesBuilder::new(shared.clone(), table);
let pack = SharedPackage {
chain_services_builder: Some(chain_services_builder),
tx_pool_builder: Some(tx_pool_builder),
relay_tx_receiver: Some(receiver),
};
Ok((shared, pack))
} | TODO(doc): @quake | build | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn take_chain_services_builder(&mut self) -> ChainServicesBuilder {
self.chain_services_builder
.take()
.expect("take chain_services_builder")
} | Takes the chain_services_builder out of the package, leaving a None in its place. | take_chain_services_builder | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn take_tx_pool_builder(&mut self) -> TxPoolServiceBuilder {
self.tx_pool_builder.take().expect("take tx_pool_builder")
} | Takes the tx_pool_builder out of the package, leaving a None in its place. | take_tx_pool_builder | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn take_relay_tx_receiver(&mut self) -> Receiver<TxVerificationResult> {
self.relay_tx_receiver
.take()
.expect("take relay_tx_receiver")
} | Takes the relay_tx_receiver out of the package, leaving a None in its place. | take_relay_tx_receiver | rust | nervosnetwork/ckb | shared/src/shared_builder.rs | https://github.com/nervosnetwork/ckb/blob/master/shared/src/shared_builder.rs | MIT |
pub fn to_equivalent_output(&self) -> CellOutput {
CellOutput::new_builder()
.lock(self.lock())
.capacity(self.capacity().pack())
.build()
} | Return a `CellOutput` with the equivalent capacity to the original TXO | to_equivalent_output | rust | nervosnetwork/ckb | test/src/txo.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/txo.rs | MIT |
pub fn to_minimal_output(&self) -> CellOutput {
CellOutput::new_builder()
.lock(self.lock())
.build_exact_capacity(Capacity::zero())
.unwrap()
} | Return a `CellOutput` with the minimal capacity | to_minimal_output | rust | nervosnetwork/ckb | test/src/txo.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/txo.rs | MIT |
pub fn bang_random_fee<C>(&self, cell_deps: C) -> Vec<TransactionView>
where
C: IntoIterator<Item = CellDep>,
{
let cell_deps: Vec<_> = cell_deps.into_iter().collect();
let mut rng = thread_rng();
self.iter()
.map(|txo| {
let maximal_capacity = txo.capacity();
let minimal_capacity: u64 = txo.to_minimal_output().capacity().unpack();
let actual_capacity = rng.gen_range(minimal_capacity..=maximal_capacity);
let output = txo
.to_equivalent_output()
.as_builder()
.capacity(actual_capacity.pack())
.build();
TransactionBuilder::default()
.cell_deps(cell_deps.clone())
.input(txo.to_input())
.output(output)
.output_data(Default::default())
.build()
})
.collect()
} | Construct transactions which convert the UTXO to another UTXO, given random fees | bang_random_fee | rust | nervosnetwork/ckb | test/src/txo.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/txo.rs | MIT |
pub fn temp_path(case_name: &str, suffix: &str) -> PathBuf {
let mut builder = tempfile::Builder::new();
let prefix = ["ckb-it", case_name, suffix, ""].join("-");
builder.prefix(&prefix);
let tempdir = if let Ok(val) = env::var("CKB_INTEGRATION_TEST_TMP") {
builder.tempdir_in(val)
} else {
builder.tempdir()
}
.expect("create tempdir failed");
let path = tempdir.path().to_owned();
tempdir.close().expect("close tempdir failed");
path
} | Return a random path located on temp_dir
We use `tempdir` only for generating a random path, and expect the corresponding directory
that `tempdir` creates to be deleted when we go out of this function. | temp_path | rust | nervosnetwork/ckb | test/src/utils.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/utils.rs | MIT |
pub fn generate_utxo_set(node: &Node, n: usize) -> TXOSet {
// Ensure all the cellbases will be used later are already mature.
let cellbase_maturity = node.consensus().cellbase_maturity();
node.mine(cellbase_maturity.index());
// Explode these mature cellbases into multiple cells
let mut n_outputs = 0;
let mut txs = Vec::new();
while n > n_outputs {
node.mine(1);
let mature_number = node.get_tip_block_number() - cellbase_maturity.index();
let mature_block = node.get_block_by_number(mature_number);
let mature_cellbase = mature_block.transaction(0).unwrap();
if mature_cellbase.outputs().is_empty() {
continue;
}
let mature_utxos: TXOSet = TXOSet::from(&mature_cellbase);
let tx = mature_utxos.boom(vec![node.always_success_cell_dep()]);
n_outputs += tx.outputs().len();
txs.push(tx);
}
// Ensure all the transactions were committed
txs.iter().for_each(|tx| {
node.submit_transaction(tx);
});
while txs.iter().any(|tx| !is_transaction_committed(node, tx)) {
node.mine(node.consensus().finalization_delay_length());
}
let mut utxos = TXOSet::default();
txs.iter()
.for_each(|tx| utxos.extend(Into::<TXOSet>::into(tx)));
utxos.truncate(n);
node.wait_for_tx_pool();
utxos
} | Generate new blocks and explode these cellbases into `n` live cells | generate_utxo_set | rust | nervosnetwork/ckb | test/src/utils.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/utils.rs | MIT |
pub fn commit(node: &Node, committed: &[&TransactionView]) -> BlockView {
let committed = committed
.iter()
.map(|t| t.to_owned().to_owned())
.collect::<Vec<_>>();
blank(node)
.as_advanced_builder()
.transactions(committed)
.build()
} | Return a blank block with additional committed transactions | commit | rust | nervosnetwork/ckb | test/src/utils.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/utils.rs | MIT |
pub fn propose(node: &Node, proposals: &[&TransactionView]) -> BlockView {
let proposals = proposals.iter().map(|tx| tx.proposal_short_id());
blank(node)
.as_advanced_builder()
.proposals(proposals)
.build()
} | Return a blank block with additional proposed transactions | propose | rust | nervosnetwork/ckb | test/src/utils.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/utils.rs | MIT |
pub fn blank(node: &Node) -> BlockView {
let example = node.new_block(None, None, None);
example
.as_advanced_builder()
.set_proposals(vec![])
.set_transactions(vec![example.transaction(0).unwrap()]) // cellbase
.set_uncles(vec![])
.build()
} | Return a block with `proposals = [], transactions = [cellbase], uncles = []` | blank | rust | nervosnetwork/ckb | test/src/utils.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/utils.rs | MIT |
pub fn get_tip_tx_pool_info(&self) -> TxPoolInfo {
let tip_header = self.rpc_client().get_tip_header();
let tip_hash = &tip_header.hash;
let instant = Instant::now();
let mut recent = TxPoolInfo::default();
while instant.elapsed() < Duration::from_secs(10) {
let tx_pool_info = self.rpc_client().tx_pool_info();
if &tx_pool_info.tip_hash == tip_hash {
return tx_pool_info;
}
recent = tx_pool_info;
}
panic!(
"timeout to get_tip_tx_pool_info, tip_header={tip_header:?}, tx_pool_info: {recent:?}"
);
} | The states of the chain and the txpool are updated asynchronously, which means the chain may have
updated to the newest tip while the txpool has not.
get_tip_tx_pool_info waits to ensure the txpool has updated to the newest tip as well. | get_tip_tx_pool_info | rust | nervosnetwork/ckb | test/src/node.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/node.rs | MIT |
pub fn start(self) -> JoinHandle<()> {
thread::spawn(move || {
let mut start_sequencial_task = false;
loop {
let msg = match self.inbox.try_recv() {
Ok(msg) => Some(msg),
Err(err) => {
if !err.is_empty() {
self.outbox.send(Notify::Stop).unwrap();
std::panic::panic_any(err)
}
None
}
};
// check command
match msg {
Some(Command::StartSequencial) => {
start_sequencial_task = true;
}
Some(Command::Shutdown) => {
self.outbox.send(Notify::Stop).unwrap();
return;
}
_ => {}
}
// pick a spec to run
let task = self.tasks.lock().pop();
match task {
Some(spec) => {
// if spec.name() is RandomlyKill or SyncChurn, then push it to sequencial_tasks
if SEQUENCIAL_TASKS.contains(&spec.name()) {
info!("push {} to sequencial_tasks", spec.name());
self.sequencial_tasks.lock().push(spec);
} else {
self.run_spec(spec.as_ref(), 0);
}
}
None => {
if self.sequencial_worker {
info!("sequencial worker is waiting for command");
if start_sequencial_task {
match self.sequencial_tasks.lock().pop() {
Some(spec) => {
self.run_spec(spec.as_ref(), 0);
}
None => {
info!("sequencial worker has no task to run");
self.outbox.send(Notify::Stop).unwrap();
return;
}
};
} else {
info!("sequencial worker is waiting for parallel workers finish");
std::thread::sleep(std::time::Duration::from_secs(1));
}
} else {
self.outbox.send(Notify::Stop).unwrap();
return;
}
}
};
}
})
} | start handle tasks | start | rust | nervosnetwork/ckb | test/src/worker.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/worker.rs | MIT |
pub fn new(
count: usize,
tasks: Arc<Mutex<Vec<Box<dyn Spec>>>>,
outbox: Sender<Notify>,
start_port: u16,
) -> Self {
let start_port = Arc::new(AtomicU16::new(start_port));
let sequencial_tasks = Arc::new(Mutex::new(Vec::new()));
let workers: Vec<_> = (0..count)
.map({
let tasks = Arc::clone(&tasks);
let sequencial_tasks = Arc::clone(&sequencial_tasks);
move |_| {
let (command_tx, command_rx) = unbounded();
let worker = Worker::new(
Arc::clone(&tasks),
Arc::clone(&sequencial_tasks),
command_rx,
outbox.clone(),
Arc::clone(&start_port),
);
(command_tx, worker)
}
})
.collect();
Workers {
workers,
join_handles: None,
is_shutdown: false,
}
} | Create n workers | new | rust | nervosnetwork/ckb | test/src/worker.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/worker.rs | MIT |
pub fn start(&mut self) {
self.workers.first_mut().unwrap().1.sequencial_worker = true;
let mut join_handles = Vec::new();
for w in self.workers.iter_mut() {
let h = w.1.clone().start();
join_handles.push(h);
}
self.join_handles.replace(join_handles);
} | start all workers | start | rust | nervosnetwork/ckb | test/src/worker.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/worker.rs | MIT |
pub fn shutdown(&mut self) {
if self.is_shutdown {
return;
}
for w in &self.workers {
if let Err(err) = w.0.send(Command::Shutdown) {
info!("shutdown worker failed, error: {}", err);
}
}
self.is_shutdown = true;
} | shutdown all workers, must call join_all after this. | shutdown | rust | nervosnetwork/ckb | test/src/worker.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/worker.rs | MIT |
pub fn join_all(&mut self) {
if self.join_handles.is_none() {
return;
}
// make sure shutdown all workers
self.shutdown();
for h in self.join_handles.take().unwrap() {
h.join().expect("wait worker shutdown");
}
} | Wait for all workers to shut down | join_all | rust | nervosnetwork/ckb | test/src/worker.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/worker.rs | MIT |
pub fn mine_until_out_bootstrap_period(&self) {
// TODO predicate by output.is_some() is more realistic. But keeps original behaviours,
// update it later.
// let predicate = || {
// node.get_tip_block()
// .transaction(0)
// .map(|tx| tx.output(0).is_some())
// .unwrap_or(false)
// };
let farthest = self.consensus().tx_proposal_window().farthest();
let out_bootstrap_period = farthest + 2;
let predicate = || self.get_tip_block_number() >= out_bootstrap_period;
self.mine_until_bool(predicate)
} | The range `[1, PROPOSAL_WINDOW.farthest()]` of the chain is called the bootstrap period. Cellbases within
this period have zero capacity.
This function generates blank blocks until node.tip_block_number > PROPOSAL_WINDOW.farthest().
Typically invoke this function at the beginning of a test. | mine_until_out_bootstrap_period | rust | nervosnetwork/ckb | test/src/util/mining.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/util/mining.rs | MIT |
pub(crate) fn ensure_committed(node: &Node, transaction: &TransactionView) -> OutPoint {
let closest = node.consensus().tx_proposal_window().closest();
let tx_hash = transaction.hash();
node.rpc_client()
.send_transaction(transaction.data().into());
node.mine_until_transaction_confirm_with_windows(&tx_hash, closest);
assert!(is_transaction_committed(node, transaction));
OutPoint::new(tx_hash, 0)
} | Send the given transaction and make it committed | ensure_committed | rust | nervosnetwork/ckb | test/src/specs/dao/utils.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/specs/dao/utils.rs | MIT |
pub(crate) fn goto_target_point(node: &Node, target_point: EpochNumberWithFraction) {
loop {
let tip_epoch = node.rpc_client().get_tip_header().inner.epoch;
let tip_point = EpochNumberWithFraction::from_full_value(tip_epoch.value());
// Here is our target EpochNumberWithFraction.
if tip_point >= target_point {
break;
}
node.mine(1);
}
} | A helper function that keeps the node mining until it reaches the target EpochNumberWithFraction. | goto_target_point | rust | nervosnetwork/ckb | test/src/specs/dao/utils.rs | https://github.com/nervosnetwork/ckb/blob/master/test/src/specs/dao/utils.rs | MIT |
pub fn new(new_work_tx: Sender<Works>, config: MinerClientConfig, handle: Handle) -> Client {
let uri: Uri = config.rpc_url.parse().expect("valid rpc url");
Client {
current_work_id: Arc::new(AtomicU64::new(0)),
rpc: Rpc::new(uri, handle.clone()),
new_work_tx,
config,
handle,
}
} | Construct new Client | new | rust | nervosnetwork/ckb | miner/src/client.rs | https://github.com/nervosnetwork/ckb/blob/master/miner/src/client.rs | MIT |
pub fn new(
pow: Arc<dyn PowEngine>,
client: Client,
work_rx: Receiver<Works>,
workers: &[MinerWorkerConfig],
limit: u128,
) -> Miner {
let (nonce_tx, nonce_rx) = unbounded();
let mp = MultiProgress::new();
let worker_controllers = workers
.iter()
.map(|config| start_worker(Arc::clone(&pow), config, nonce_tx.clone(), &mp))
.collect();
let pb = mp.add(ProgressBar::new(100));
pb.set_style(ProgressStyle::default_bar().template("{msg:.green}"));
let stderr_is_tty = console::Term::stderr().features().is_attended();
thread::spawn(move || {
mp.join().expect("MultiProgress join failed");
});
Miner {
legacy_work: LruCache::new(WORK_CACHE_SIZE),
nonces_found: 0,
_pow: pow,
client,
worker_controllers,
work_rx,
nonce_rx,
pb,
stderr_is_tty,
limit,
}
} | TODO(doc): @quake | new | rust | nervosnetwork/ckb | miner/src/miner.rs | https://github.com/nervosnetwork/ckb/blob/master/miner/src/miner.rs | MIT |
pub fn run(&mut self, stop_rx: Receiver<()>) {
loop {
select! {
recv(self.work_rx) -> msg => match msg {
Ok(work) => {
match work {
Works::FailSubmit(hash) => {
self.legacy_work.pop(&hash);
},
Works::New(work) => self.notify_new_work(work),
}
},
_ => {
error!("work_rx closed");
break;
},
},
recv(self.nonce_rx) -> msg => match msg {
Ok((pow_hash, work, nonce)) => {
self.submit_nonce(pow_hash, work, nonce);
if self.limit != 0 && self.nonces_found >= self.limit {
debug!("miner nonce limit reached, terminate ...");
broadcast_exit_signals();
}
},
_ => {
error!("nonce_rx closed");
break;
},
},
recv(stop_rx) -> _msg => {
info!("miner received exit signal, stopped");
break;
}
};
}
} | TODO(doc): @quake | run | rust | nervosnetwork/ckb | miner/src/miner.rs | https://github.com/nervosnetwork/ckb/blob/master/miner/src/miner.rs | MIT |
pub fn new(config: RpcConfig, io_handler: IoHandler, handler: Handle) -> Self {
if let Some(jsonrpc_batch_limit) = config.rpc_batch_limit {
let _ = JSONRPC_BATCH_LIMIT.get_or_init(|| jsonrpc_batch_limit);
}
let rpc = Arc::new(io_handler);
let http_address = Self::start_server(
&rpc,
config.listen_address.to_owned(),
handler.clone(),
false,
)
.inspect(|&local_addr| {
info!("Listen HTTP RPCServer on address: {}", local_addr);
})
.unwrap();
let ws_address = if let Some(addr) = config.ws_listen_address {
let local_addr =
Self::start_server(&rpc, addr, handler.clone(), true).inspect(|&addr| {
info!("Listen WebSocket RPCServer on address: {}", addr);
});
local_addr.ok()
} else {
None
};
let tcp_address = if let Some(addr) = config.tcp_listen_address {
let local_addr = handler.block_on(Self::start_tcp_server(rpc, addr, handler.clone()));
if let Ok(addr) = &local_addr {
info!("Listen TCP RPCServer on address: {}", addr);
};
local_addr.ok()
} else {
None
};
Self {
http_address,
tcp_address,
ws_address,
}
} | Creates an RPC server.
## Parameters
* `config` - RPC config options.
* `io_handler` - RPC methods handler. See [ServiceBuilder](../service_builder/struct.ServiceBuilder.html).
* `handler` - Tokio runtime handle. | new | rust | nervosnetwork/ckb | rpc/src/server.rs | https://github.com/nervosnetwork/ckb/blob/master/rpc/src/server.rs | MIT |
async fn ping_handler() -> impl IntoResponse {
"pong"
} | Used for compatibility with the old health endpoint | ping_handler | rust | nervosnetwork/ckb | rpc/src/server.rs | https://github.com/nervosnetwork/ckb/blob/master/rpc/src/server.rs | MIT |
async fn get_error_handler() -> impl IntoResponse {
(
StatusCode::METHOD_NOT_ALLOWED,
"Used HTTP Method is not allowed. POST or OPTIONS is required",
)
} | Used for compatibility with the old RPC error response for GET | get_error_handler | rust | nervosnetwork/ckb | rpc/src/server.rs | https://github.com/nervosnetwork/ckb/blob/master/rpc/src/server.rs | MIT |
fn remove_backtrace(err_str: &str) -> &str {
match err_str.find("\nStack backtrace:") {
Some(idx) => &err_str[..idx],
None => err_str,
}
} | Removes the backtrace portion from an error string. | remove_backtrace | rust | nervosnetwork/ckb | rpc/src/error.rs | https://github.com/nervosnetwork/ckb/blob/master/rpc/src/error.rs | MIT |
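A small check grounded directly in the function above: everything from the first `\nStack backtrace:` marker onward is dropped, and strings without the marker pass through unchanged.

```rust
#[test]
fn remove_backtrace_strips_trailing_backtrace() {
    let raw = "failed to load cell\nStack backtrace:\n   0: core::panicking::panic";
    assert_eq!(remove_backtrace(raw), "failed to load cell");
    // No marker: the input is returned unchanged.
    assert_eq!(remove_backtrace("plain error"), "plain error");
}
```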
pub fn invalid_params<T: Display>(message: T) -> Error {
Error {
code: ErrorCode::InvalidParams,
message: format!("InvalidParams: {message}"),
data: None,
}
} | Invalid method parameter(s). | invalid_params | rust | nervosnetwork/ckb | rpc/src/error.rs | https://github.com/nervosnetwork/ckb/blob/master/rpc/src/error.rs | MIT |
pub fn custom<T: Display>(error_code: RPCError, message: T) -> Error {
Error {
code: ErrorCode::ServerError(error_code as i64),
message: format!("{error_code:?}: {message}"),
data: None,
}
} | Creates an RPC error with custom error code and message. | custom | rust | nervosnetwork/ckb | rpc/src/error.rs | https://github.com/nervosnetwork/ckb/blob/master/rpc/src/error.rs | MIT |
pub fn custom_with_data<T: Display, F: Debug>(
error_code: RPCError,
message: T,
data: F,
) -> Error {
Error {
code: ErrorCode::ServerError(error_code as i64),
message: format!("{error_code:?}: {message}"),
data: Some(Value::String(format!("{data:?}"))),
}
} | Creates an RPC error with custom error code, message and data. | custom_with_data | rust | nervosnetwork/ckb | rpc/src/error.rs | https://github.com/nervosnetwork/ckb/blob/master/rpc/src/error.rs | MIT |
pub fn custom_with_error<T: Display + Debug>(error_code: RPCError, err: T) -> Error {
let err_str_with_backtrace = format!("{err:?}");
let err_str = remove_backtrace(&err_str_with_backtrace);
Error {
code: ErrorCode::ServerError(error_code as i64),
message: format!("{error_code:?}: {err}"),
data: Some(Value::String(err_str.to_string())),
}
} | Creates an RPC error from std error with the custom error code.
The parameter `err` is usually an std error. The Display form is used as the error message,
and the Debug form is used as the data. | custom_with_error | rust | nervosnetwork/ckb | rpc/src/error.rs | https://github.com/nervosnetwork/ckb/blob/master/rpc/src/error.rs | MIT |
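A hedged illustration of the Display/Debug split described above; the variant name `RPCError::CKBInternalError` is an assumption and may not match the actual enum.

```rust
use std::io;

fn example_rpc_error() -> Error {
    let io_err = io::Error::new(io::ErrorKind::NotFound, "index file missing");
    // Display("index file missing") becomes part of `message`; the Debug form,
    // with any backtrace stripped, becomes `data`.
    RPCError::custom_with_error(RPCError::CKBInternalError, io_err)
}
```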