218 Cards in this Set
- Front
- Back
DEP
|
Data Execution Prevention
|
|
ASLR
|
Address Space Layout Randomization
Maps shared libraries to random locations in process memory -> addresses of executable code unknown to the attacker |
|
IDS
|
Intrusion Detection System
|
|
CSS
|
Content Scrambling System
|
|
AACS
|
Advanced Access Content System
|
|
CVV/CVV2
|
Card Verification Value
|
|
EMV
|
Europay Mastercard Visa
|
|
SET
|
Secure Electronic Transaction
|
|
Trojan Horse
|
overt purpose (known to user)
covert purpose (unknown to user): Back doors, Keyloggers, Web clickers, Proxies In classical sense: Do not reproduce themselves |
|
Virus
|
Program that inserts itself into one or more programs and performs actions
Insertion phase (necessary), execution phase (optional) Contrast to worms: a human has to run the infected program to propagate the virus Types: Boot Sector Infector, Executable Infector, Terminate and Stay Resident (TSR), Macro Viruses, Encrypted Viruses, Polymorphic Viruses, Metamorphic Viruses |
|
Social Engineering
|
Trick users into "voluntarily" installing malicious code
|
|
Rootkit
|
Often modifies parts of OS
May install "back door" Stealthy Infection path: stolen password / dictionary attack, buffer overflow to gain root privileges, download rootkit, unpack, compile, install |
|
Function Hooking
|
Change pointer in OS's Global Offset Table
Insert jump in legitimate function |
|
Worm
|
Program that copies itself from one PC to another
|
|
Botnets
|
Network of compromised computers where bot is installed
Remotely controlled by attacker (command and control = C&C) Attacker = herder Infected PC = zombie / drone Can have other malware characteristics like spyware, worm, backdoor, ... Lifecycle: Creation, Infection, Rallying (bots start up and try to contact the C&C server), Waiting, Execution |
|
C&C Centralized
|
|
|
C&C Decentralized
|
|
|
C&C Prevention
|
Locate C&C servers and take them down
-Analyze network traffic -Analyze behavior / code of bots -Contact owners of infected machines Make C&C server impossible to contact: -Block hostname in DNS -Register hostnames calculated by bots -Block IP range of C&C infrastructure -Disconnect rogue hosting companies |
|
Make money with botnets
|
Spam, Phishing, Malware/Adware, Stealing information, DDoS, Extortion, Click-Fraud
|
|
Antivirus
|
Recognize only known malware
cannot deal with viruses with new signatures Heuristics for known patterns in polymorphic algorithms Find modified files (checksums, MACs) Emulate CPU execution Defense: Distinguish between data and instructions Limit objects accessible to processes Inhibit sharing Detect altering of files Detect actions beyond specifications Analyze statistical characteristics |
|
Detect new Bots
|
Anomaly detection on infected host
Monitoring network traffic Honeypots |
|
Honeypot
|
|
|
SCADA Systems
|
Supervisory Control and Data Acquisition System
Monitor and control industrial processes Often across several physical locations Often not well protected (users think systems are safe: uncommon software, no internet, etc.) |
|
Zero-Day exploit
|
Exploits that were previously unknown to security experts
|
|
Stuxnet
|
Uses vulnerabilities in the Windows OS
Searches for SCADA control software Infects the control software via a vulnerability in WinCC/Step7 Manipulates the attached PLCs 3 zero-day exploits Uses signing keys of valid certificates of Taiwanese companies Initial infection via USB storage device Modifies PLCs only if: PLCs connected to a frequency converter, frequency converter produced by one of two manufacturers, attached motors operate at 807-1210 Hz Then modifies the frequency Target: Iranian nuclear plants Update via C&C |
|
Security Mechanisms on Smartphones
|
Access rights for apps
Apps typically run in sandboxes Code signing: -Can trace author -No unauthorized updates Service connection -remote cleanup of infected system -may delete data and applications from lost or stolen device Appstores -Primary source of third party software -Apps can be controlled by the market operator |
|
Security Problems Android
|
Sandboxing, access rights:
Any two applications of the same author can mutually access their data -> both applications share access rights Code signing: -No central certification authority -> authors cannot be traced Access rights: can only be accepted on an "all-or-none" basis |
|
Security Mechanisms and Problems iOS
|
Codesigning:
-Developer has to register with Apple -Only signed applications are executed Appstore: -Only apps from Appstore can be executed -> code review for all apps Jailbreaking: allows unauthorized apps -> leaves device unprotected |
|
Attractiveness of Android to Malware
|
Widely distributed
Easy to attack Authors cannot be traced back Easy development Difficulties in update cycle: -Base Android gets fixed -Manufacturers use custom Android versions -> takes time to integrate base updates into the custom version -Takes even more time before the user actually updates |
|
Android Infection
|
Downloading malicious apps (malicious free versions of paid apps, apps that become malicious after an update)
Drive by download |
|
Android Malicious Functionality
|
SMS/Calls to premium numbers
Automated purchases of apps SMS spam distribution Data theft Reloading malware Bot functionality Installing adware Destructive malware Infection of connected computers |
|
Buffer Overflows
|
Assumption: Programmer is responsible for data integrity -> Programmers can write programs that are vulnerable to buffer overflows
Once memory is allocated for a variable, data of arbitrary size can be written into it -> overflow |
|
Text Segment
|
Text (Code) segment stores machine language instructions
No write permission in text segment -> code cannot be modified Text segment has fixed size |
|
Data and Bss Segment
|
Stores global and static program variables
Data segment: -Filled with initialized global and static variables Bss: -Filled with the uninitialized counterparts Segments are writable but have fixed size |
|
Heap Segment
|
Controlled directly by programmer
Memory blocks can be allocated/reserved explicitly Variable size |
|
Stack Segment
|
Variable Size
Temporarily stores local function variables and context Stack is used to remember: -Passed variables, return location for EIP, local variables Extended Stack Pointer (ESP) keeps track of the current end of the stack |
|
Buffer
|
Data storage area inside stack or heap
Intended to hold pre-defined amount of data Executable code can be supplied as "data" |
|
Buffer overflow
|
IT-Sec Chp 6 p 15-17
|
|
Code Injection
|
Executable attack code is stored on the stack, inside the buffer containing the attacker's string
The attacker must correctly guess in which stack position his buffer will be when the function is called Exploited vulnerability: no range checking for many operations (strcpy, scanf, printf) |
|
Return Address Attack
|
Change return address to point to attack code
or use existing instruction in the code segment (system(), exec(), etc.) |
|
Pointer Variables Attack
|
Change a function pointer to point to attack code
Any memory location, even outside the stack, can be modified by the statement that stores a value through the compromised pointer |
|
Off-By-One Overflow
|
Overflow of size one
Cannot change RET but can change the saved frame pointer (SFP) to a previous stack frame |
|
SFP
|
Saved Frame Pointer
|
|
RET
|
Return Address
|
|
Frame Pointer Attack
|
Change the caller's saved frame pointer to point to attacker-controlled memory. The caller's return address will be read from this memory
|
|
Heap Spraying
|
Use JavaScript to "spray" the heap with shellcode
Point the vtable pointer anywhere into the sprayed area |
|
Memory Management Errors
|
Initialization errors
Failing to check return values Writing to already freed memory Freeing the same memory more than once Improperly paired memory management functions Failure to distinguish scalars and arrays Improper use of allocation functions -> Exploitable vulnerabilities |
|
Format Strings
|
|
|
Null HTTP Heap Overflow
|
When corrupted buffer is freed, overflown value is copied to location whose address is also read from overflown area
Enables attacker to write an arbitrary value into a memory location of his choice |
|
Buffer Overflow Prevention
|
Use safe programming languages (e.g. Java)
Mark stack as non-executable Randomize stack location or encrypt return address on stack Static analysis of source code to find overflows Run-time checking of array and buffer bounds Black-box testing with long strings |
|
W xor X / DEP
|
Write xor execute /
Data Execution Prevention Blocks (almost) all code injection exploits Hardware support: AMD "NX" bit, Intel "XD" bit Still possible: overwrite the saved EIP (only execution of injected data is blocked) -> return-to-libc exploits Some applications require an executable stack (example: Lisp interpreter) Some applications are not linked with /NXCOMPAT The JVM makes all its memory RWX (readable, writable, executable) |
|
Static Analysis
|
Catch buffer overflow bugs by looking at source code
Soundness: find all instances of buffer overflow Completeness: Every reported problem is indeed an instance of buffer overflow |
|
Wagner et al. Approach
|
Treat C strings as abstract data types, assume C strings are accessed only through library functions -> Pointer arithmetic is greatly simplified
Characterize each buffer by its allocated size and current length Determine acceptable ranges for these values at each point of the program Chapter 6 p 56-60 |
|
Stack Guard
|
Embed "canaries" in stack frames and verify their integrity prior to function return
-> Overflow of local variables will damage the canary (see the sketch below) Random canary: choose a random canary string at program start (attacker can't guess it) Terminator canary: any termination symbol for C string library functions such as "\0", newline, linefeed, EOF -> string functions like strcpy won't copy beyond the symbol |
|
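Illustrative sketch only (StackGuard itself works at the compiler/machine-code level): a Python model of a stack frame with a random canary between a local buffer and the saved return address. Frame layout, buffer size and overflow length are invented for the example.

    import os

    def make_frame(buf_size=16):
        # [ local buffer | canary | saved RET ] - simplified frame layout
        canary = os.urandom(4)                      # random canary chosen at "program start"
        ret = (0xdeadbeef).to_bytes(4, "little")    # pretend saved return address
        return bytearray(b"\x00" * buf_size + canary + ret), canary

    def strcpy_overflow(frame, data):
        # no bounds check: data longer than the buffer overwrites canary and RET
        frame[0:len(data)] = data

    def check_canary(frame, canary, buf_size=16):
        # verified just before the function returns
        return bytes(frame[buf_size:buf_size + 4]) == canary

    frame, canary = make_frame()
    strcpy_overflow(frame, b"A" * 28)               # 28 > 16: smashes canary and RET
    print(check_canary(frame, canary))              # False -> abort instead of returning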
PointGuard
|
Attack: overflow function pointer to point at attack code
Solution: encrypt pointers in memory with a random key (XOR) -> attacker cannot redirect a pointer to a chosen target If a pointer is overwritten, the overwritten value is XORed with the key on use and dereferences to a "random" memory address (sketch below) |
|
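A minimal sketch of the PointGuard idea in Python, modelling pointers as integers; the key width and the addresses are arbitrary, and real PointGuard performs the XOR in compiler-generated load/store code.

    import secrets

    KEY = secrets.randbits(32)                # per-process random key

    def store_ptr(addr):                      # pointer is kept encrypted in memory
        return addr ^ KEY

    def load_ptr(stored):                     # decrypted only when dereferenced
        return stored ^ KEY

    legit = 0x08048abc
    mem = store_ptr(legit)
    print(hex(load_ptr(mem)))                 # original address

    mem = 0x0badc0de                          # attacker overwrites the raw memory word
    print(hex(load_ptr(mem)))                 # decrypts to a "random" address, not the target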
Access Control
|
Controls which subjects have which form of access rights to which objects in a system
Subjects: e.g. persons, processes, machines, ... Objects: e.g. files, programs, ports,.. Rights: e.g. read, write, execute, append... Authentication: proof that subject is who it claims to be Authorization: determine who is authorized to access an object |
|
Discretionary Access Control
|
Define owner
Owner decides who may access an object |
|
Mandatory Access Control
|
System-wide security policy decrees access to objects
|
|
Access Control Matrix
|
Disadvantages:
The full matrix uses much storage Most entries in the matrix will be blank or identical (e.g. default settings) Storage management (due to creation and deletion of objects) |
|
Access Control Lists
|
Idea: for each object, store the set of subjects and their rights (sketch below)
Possible improvement: divide subjects into classes and specify access rights for classes instead of subjects BUT: loss of granularity |
|
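A toy Python sketch of an ACL lookup (object -> subject -> set of rights); the object names, subjects and rights are made up for illustration.

    # per-object ACL: subject -> set of rights
    acl = {
        "salaries.txt": {"alice": {"read", "write"}, "bob": {"read"}},
        "backup.sh":    {"alice": {"read", "execute"}},
    }

    def allowed(subject, obj, right):
        # look up the object's ACL, then the subject's entry
        return right in acl.get(obj, {}).get(subject, set())

    print(allowed("bob", "salaries.txt", "read"))    # True
    print(allowed("bob", "salaries.txt", "write"))   # False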
ACL Modification
|
Approach 1 (e.g. Unix):
Create ACL with the object Creator has all rights Creator may change the ACL Approach 2: anyone with a particular right is allowed to change the ACL |
|
ACL Wildcards
|
E.g. UNICOS:
holly:*:r means a process with user id holly can read the object regardless of its group id *:sys:r means that any process with group id sys can read the object |
|
ACL Conflicts
|
1. Allow access if any permission allows it
2. Deny access if any permission denies it 3. Apply the first entry that matches the subject |
|
ACL Right revocation
|
Case 1: granting rights is controlled by the owner: subjects can be deleted from the ACL / rights changed in the ACL
Case 2: rights are granted not only by the owner Example: A grants to B, B grants to C; A revokes B -> should also revoke C A grants to B, B grants to C, and D also grants to C; A revokes B -> should not revoke C |
|
Access Control Unix
|
Classes of subjects: owner, group, rest
Rights: read, write, execute Access control displayed with flag for directory(d) or not (-) Each process has three user IDs and three group IDs: real uid, effective uid, saved uid real gid, effective gid, saved gid Real user ID: identifies owner of process Effective user ID: can be assigned to a process by a system call (setuid) Saved user ID: stores a previous user ID such that it can be restored |
|
Windows Access Control
|
rights: read, write, execute, delete, change permission, take ownership
generic rights: no access, read (read, execute), change (read, execute, write, delete), full control (all rights) Special access right: allows assignment of any other combination of base rights Generic rights (directory): no access, read, list (list content and change to subdirectory), add and read, change (create, read, execute, or write files within the directory, delete subdirectories), full control If any ACL entry denies access, Windows denies access If access is not explicitly denied and the user is named in the ACL (directly or via a group), the user has the union of rights from the matching ACL entries Otherwise access is denied |
|
Capabilities
|
For each subject, list objects and the subject's rights on them
Disadvantage: hard to determine who is allowed to access a given object |
|
Locks and Keys (Access Control)
|
Combines features of access control lists and capabilities
Idea: a piece of information (lock) is associated with each object A second piece of information (key) is associated with those subjects that are allowed to access the object If a subject has a key to any of the locks of an object, the appropriate access right is granted More dynamic: locks and keys can change based on system constraints or other factors |
|
Firewall types
|
Packet filters
Stateful firewalls Application layer firewalls |
|
Firewall
|
Controls access between an internal network and an external network
Simple: filter IP packets based on addresses and ports Advanced: Network Address Translation, service differentiation to allow prioritizing (e.g. for VoIP) Inspect content of packets: filter other related traffic correctly, block packets that contain offensive information, block intrusion attempts Security policy: an arriving packet is accepted/denied, a denied packet can be silently dropped or bounced back Some firewalls log information about arriving packets The policy is a list of rules; rules consist of tuples and actions; tuples correspond to e.g. protocol type, source IP, destination IP, source port, destination port |
|
Firewall rules
|
Rules consist of tuples and actions
Tuples correspond to e.g. protocol type, source IP, destination IP, source port, destination port Rules may contain wildcards A policy R can be described by three sets: A(R) set of packets that will be accepted, D(R) set of packets that will be denied, U(R) set of packets that do not match any rule Comprehensive policies have U(R) = empty |
|
Best-match policies (Firewall rules)
|
Packets are compared against every rule -> determine which rule most closely matches each tuple
|
|
First-/Last-match policies
|
Action of the first/last matching rule is performed
|
|
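A sketch of a first-match packet filter in Python. The rule fields and the example policy are invented; the final wildcard rule makes U(R) empty.

    ANY = "*"   # wildcard

    # (proto, src_ip, dst_ip, dst_port) -> action, evaluated top to bottom
    rules = [
        (("tcp", ANY, "10.0.0.5", 80),  "accept"),   # allow web traffic to the server
        (("tcp", ANY, ANY, 23),         "deny"),     # block telnet everywhere
        ((ANY, ANY, ANY, ANY),          "deny"),     # default rule: U(R) is empty
    ]

    def first_match(packet):
        for pattern, action in rules:
            if all(p == ANY or p == f for p, f in zip(pattern, packet)):
                return action                        # first matching rule wins
        return "deny"

    print(first_match(("tcp", "1.2.3.4", "10.0.0.5", 80)))  # accept
    print(first_match(("udp", "1.2.3.4", "10.0.0.7", 53)))  # deny (default rule)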
(Half-)Shadowing (firewall policy)
|
Unintended consequence of adding rules in a certain order
Occurs if an earlier rule matches some (half-shadowing) or every (shadowing) packet that another, lower rule matches |
|
Packet Filter
|
Only filters at network and transport layer
Similar to network routers Typically only consider IP addresses, port numbers, transport protocol type Filtering MAC addresses also possible (helps prevent IP spoofing) |
|
Stateful Packet Firewalls
|
Same operations as packet filters
But maintain state about packets that have arrived Pro: allow arriving packets associated with an existing connection Stateful firewalls can support more restrictive policies that allow incoming traffic only from servers as a response to user requests Example FTP: uses 2 connections (control channel, data channel) -> stateful packet filter needed Possible states: New, Established (packets in both directions have been observed), Related (connections related to existing connections, e.g. FTP) |
|
Application Layer Firewalls
|
Filter traffic at network, transport, and application layer
Often proxy capabilities Firewall can inspect content of the packet Today combined with intrusion detection (detect viruses, spam, attack signatures) Only for content that is not encrypted |
|
Host and Network Firewalls
|
Host firewalls only protect one computer
A network firewall located at the gateway of a network protects all computers in the internal network Must handle high bandwidth Placement of the firewall is important (one policy for all? different devices in the internal network?) |
|
DMZ
|
Demilitarized Zones
|
|
Perimeter Networks
|
Subnetwork of computers outside the internal network
Router between external network and perimeter network Demilitarized zones connect to the external network via a firewall -> a DMZ offers a higher level of protection |
|
Two-Router Configuration
|
Only Bastion host visible from outside
Bastion host used as filter and/or proxy Less security than DMZ |
|
Dual-homed Host
|
Needs at least two network interfaces
-One connected to external network -One connected to internal network Used for: -Packet filtering -Payload inspection -NAT, proxy service |
|
Routing Protocols
|
Determine which devices in the internal network will need to receive or send routing information
|
|
ICMP
|
Internet Control Message Protocol
Routers and other devices monitor network -> when error occurs send message via ICMP (e.g. time exceeded, echo requests, destination unreachable) ICMP messages can be used as part of attacks E.g. ping uses ICMP echo request -> attacker can find connected hosts Traceroute uses ICMP to determine path between source and destination Not all ICMP should be blocked (e.g. MTU messages should be allowed) |
|
Network Time Protocol
|
Allows for synchronization of system clocks
Required for distributed applications |
|
DHCP
|
Dynamic Host Configuration Protocol
DHCP servers are typically located in the internal network DHCP traffic should typically not pass through the firewall |
|
NAT
|
Network Address Translation
E.g. firewall, router Allows sharing a smaller set of public IP addresses across a larger number of computers Intercepts packets from internal nodes, replaces the private source address with the public IP of the NAT device Incoming traffic is separated via different port numbers (sketch below) |
|
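A simplified sketch of source NAT in Python: the essential piece is the table mapping (private address, port) to a public port; all addresses and port numbers are invented.

    PUBLIC_IP = "203.0.113.1"
    nat_table = {}            # (private_ip, private_port) -> public_port
    next_port = 40000

    def translate_outgoing(src_ip, src_port, dst):
        global next_port
        key = (src_ip, src_port)
        if key not in nat_table:              # allocate a public port per internal flow
            nat_table[key] = next_port
            next_port += 1
        return (PUBLIC_IP, nat_table[key], dst)

    def translate_incoming(public_port):
        # reverse lookup: which internal host does this reply belong to?
        for (ip, port), pub in nat_table.items():
            if pub == public_port:
                return ip, port
        return None                           # unsolicited traffic is dropped

    print(translate_outgoing("192.168.1.10", 51000, ("198.51.100.7", 443)))
    print(translate_incoming(40000))          # ('192.168.1.10', 51000)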
Firewall Arrays
|
Processes arriving packets in parallel
Used for high-bandwidth / low-latency applications Each firewall identically configured Load balancer assigns packets to the firewall with the fewest packets awaiting processing Disadvantages: -Load balancing not trivial (hard to predict how long a firewall requires to process a packet) -Maintaining state of one connection is difficult Advantages: -Scalable -Robustness (if one firewall fails, the integrity of the system remains) -Easy policy management (one for all) |
|
General Problems of Firewalls
|
Interfere with networked applications
No DoS protection No prevention of insider attacks Increasing complexity and potential for misconfiguration Firewalls can't decide on encrypted connections / tunnels |
|
Intrusion Detection
|
Detection of
Known and (ideally) unknown attacks E.g.: Attempted and successful break-ins by masquerading Illegitimate use of root privileges unauthorized access to resources and data Malware DOS ... Should be fast Understandable notifications Minimize false positive and false negative rates |
|
Denning's Hypothesis (Intrusion Detection)
|
Computers NOT under attack
-Actions of users/processes accord to statistically predictable pattern -Actions do not include sequences of commands to subvert security policy -Actions conform to a set of specifications |
|
Classification IDS
|
Anomaly detection:
Detect deviation from usual behavior (statistical) Misuse (signature-based) detection: compare actions and states with known sequences of actions and states while under attack (only detects known attacks) Specification-based detection: classifies states and actions that violate a specification as bad (minor role today) In practice often a combination |
|
Threshold Metrics (IDS)
|
Count the number of events that occur; if the number falls outside a range -> anomalous
Difficulties: the appropriate threshold may depend on non-obvious factors (typing skill, US vs. German keyboard) |
|
Statistical Moments (IDS)
|
Analyze normal behavior
Mean and standard deviation, measures of correlation If measured values fall outside the expected interval -> anomalous Potential problem: the profile of expected behavior may have to evolve over time (sketch below) |
|
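A minimal sketch of mean/standard-deviation anomaly detection in Python; the training data, the test values and the 3-sigma interval are arbitrary illustrative choices.

    import statistics

    # e.g. number of failed logins per day observed during normal operation
    history = [1, 0, 2, 1, 3, 1, 0, 2, 1, 2]

    mean = statistics.mean(history)
    std = statistics.stdev(history)

    def anomalous(value, k=3):
        # flag values outside mean +/- k standard deviations
        return abs(value - mean) > k * std

    print(anomalous(2))    # False: within the expected interval
    print(anomalous(25))   # True: reported as anomaly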
Profile (IDS)
|
Login and session activity
-login and location frequency, last login, password fails ,... Command and program execution -Execution frequency, program CPU, I/O, other resources... File access activity -Read, write, create, delete frequency, failed reads, writes, ..... |
|
Markov Model (IDS)
|
Previous state affects current transition between states
Transition if an event occurs, transitions carry probabilities If an event causes a transition associated with a very low probability, this is reported as an anomaly Main difference to the other anomaly-based metrics: the decision is based on a sequence of events, not on a single event Problem: the system has to be trained to establish valid sequences (sketch below) |
|
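A sketch of the Markov-model idea in Python: transition probabilities between events are assumed to have been learned beforehand (the values here are invented), and a transition with very low probability is reported.

    # P(next_event | current_event), learned from training data (values invented)
    transitions = {
        ("login", "read_mail"):  0.6,
        ("login", "browse"):     0.39,
        ("login", "su_root"):    0.01,
        ("read_mail", "logout"): 0.9,
    }

    THRESHOLD = 0.05

    def check_sequence(events):
        alerts = []
        for cur, nxt in zip(events, events[1:]):
            p = transitions.get((cur, nxt), 0.0)   # unseen transitions get probability 0
            if p < THRESHOLD:
                alerts.append((cur, nxt, p))       # low-probability transition -> anomaly
        return alerts

    print(check_sequence(["login", "read_mail", "logout"]))  # []
    print(check_sequence(["login", "su_root"]))              # [('login', 'su_root', 0.01)]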
IDS Architecture
|
Agent (like a logger): gathers data for analysis, may put information into another form, may delete irrelevant information, more information may be requested by the director; host-based or network-based; detects different types of attacks
Director (like an analyzer): analyzes data from agents according to internal rules Notifier: obtains results from the director and takes some action |
|
Attacking / Evading NIDS
|
Overload NIDS with huge data stream
Use encryption to hide packet contents Split malicious data into multiple packets -> NIDS does not have full TCP state and does not always understand every command of the receiving application |
|
AAFID (IDS)
|
Autonomous Agents
Distribute the "director" among agents Agents communicate and decide jointly whether there is an intrusion or not Advantages: no single point of failure, compromise of one agent does not affect others, agents can migrate if needed, scalable Disadvantages: higher communication overhead, securing communication can be hard and expensive, distributed computation involved |
|
Incident Prevention
|
Identify an attack before it completes
Prevent it from completing Jails useful: -Attacker placed in a sandbox -Attacker downloads files, but not the real ones -Figure out what the attacker wants |
|
Intrusion Handling
|
Six phases:
Preparation for attack Identification of attack Containment of attack (limit damage) Eradication of attack (stop attack and block further similar ones) Recovery from attack (restore to secure state) Follow-up (monitoring and taking actions against the attacker, record lessons learned) |
|
IDIP protocol engine
|
Monitors connections passing through members of IDIP domains
If an intrusion is observed, report to neighbors Neighbors propagate information about the attack Trace connection or datagram to boundary controllers Boundary controllers coordinate the response |
|
Bayes' Theorem
|
P(A|B) = P(B|A) * P(A) / P(B)
|
|
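A worked base-rate example for IDS alarms using Bayes' theorem, with invented rates: even a detector with a low false-positive rate produces mostly false alarms when intrusions are rare.

    # assumed rates (illustrative only)
    p_intrusion = 0.001          # prior: 0.1% of events are intrusions
    p_alarm_given_intr = 0.99    # detection rate
    p_alarm_given_ok = 0.01      # false positive rate

    p_alarm = (p_alarm_given_intr * p_intrusion
               + p_alarm_given_ok * (1 - p_intrusion))

    # Bayes' theorem: P(intrusion | alarm)
    p_intr_given_alarm = p_alarm_given_intr * p_intrusion / p_alarm
    print(round(p_intr_given_alarm, 3))   # ~0.09: most alarms are false positives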
Positive recognition (Authentication)
|
System verifies that an individual has a claimed identity
|
|
Negative recognition (Identification)
|
Process by which the system establishes that an individual is indeed enrolled in the system (although the individual might deny it)
Applications: identifying criminals, social welfare double dippers etc. |
|
Examples Biometric Traits
|
|
|
Biometric system
|
Pattern recognition system
-Acquires biometric data from an individual -Extracts a salient feature set from the acquired data -Compares feature sets -Executes an action based on the comparison result Components: sensor module, feature extraction and quality assessment module, matching and decision making module, system database module |
|
Performance Measurement (Biometrics)
|
No exact results:
Imperfect sensing conditions Alterations in the user's biometric characteristics Changes in ambient conditions (light, temperature) Variations of the user's interaction with the sensor Intra-class variation: variability in feature sets of the same individual Inter-class variation: variability between feature sets of different individuals Best: high inter-class and low intra-class variation |
|
Similarity score
|
Indicates degree of similarity of two feature sets
A match score results from matching two feature sets of the same individual An impostor score results from matching two sets of different individuals Threshold t defined as the decision point False accept rate (FAR) False reject rate (FRR) FAR and FRR are a trade-off regulated by t (sketch below) |
|
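A short Python sketch of how FAR and FRR follow from the threshold t over match and impostor scores; the score lists are invented.

    genuine_scores  = [0.81, 0.92, 0.75, 0.88, 0.60]   # same individual
    impostor_scores = [0.20, 0.45, 0.55, 0.30, 0.72]   # different individuals

    def far_frr(t):
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)  # impostors accepted
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)     # genuine users rejected
        return far, frr

    for t in (0.5, 0.7, 0.9):
        print(t, far_frr(t))   # raising t lowers FAR but raises FRR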
Biometric Operating Characteristics
|
|
|
Doddington's Zoo
|
Sheep: low intra-class variation
Goats: large intra-class variation Lambs: low inter-class variation Wolves: users that can successfully manipulate their biometric traits to be accepted as someone else Idea: treat sets that are marked as one of the classes above differently (different threshold) |
|
Biometric Characteristics
|
Universality: every individual accessing the application should possess the trait
Uniqueness: the trait should be sufficiently different across individuals Permanence: the biometric trait should be invariant over a period of time Measurability: the trait should be acquirable and digitizable without undue inconvenience to the user Performance: recognition accuracy and the resources required to achieve that accuracy should meet the application's constraints Acceptability: individuals should be willing to present their biometric trait to the system Unfakeability: it should be difficult to imitate an individual using artifacts and mimicry |
|
Face recognition
|
Non-intrusive
ranges from static controlled mug-shot authentication to dynamic uncontrolled face identification in a cluttered background Approaches based on: location and shape of eyes, eyebrows, nose, lips, chin Current commercial authentication systems impose restrictions: Fixed simple background, controlled illumination |
|
Fingerprint
|
Used for a long time
Matching accuracy very high Low-cost scanners Huge amount of computational effort in identification mode Fingerprints are NOT universally usable (~4% of the population are unsuitable, e.g. genetic reasons, aging, occupational reasons) Pore matching Minutiae matching |
|
Hand Geometry
|
Measures shape, size of palm, length and width of fingers
Widely used Simple, easy, cheap Disadvantages: not very distinctive Limitation of dexterity (e.g. from arthritis) Physical size of the hand makes it inapplicable e.g. for laptops |
|
Palmprint
|
Pattern of ridges and valleys, much like fingerprints
Larger area and more distinctive than fingerprints Bulkier than fingerprint scanners Using high resolution palmprint scanner would allow to use all features of the hand (hand geometry, palmprint, fingerprint, principle lines, ...) |
|
Iris
|
Texture carries very distinctive information
Accuracy and speed very promising to support large scale identification Requires considerable user participation low false accept rates but high false reject rates compared to other traits |
|
Keystroke
|
Hypothesized that each person types on a keyboard in a characteristic way
Not expected to be unique to each individual Expected to be sufficiently discriminatory to permit authentication Behavioral biometrics, large intra-class variation Acquiring could be done unobtrusively as person keys in information Continuous authentication |
|
Signature (Biometrics)
|
Way a person signs his name
Requires contact and effort from the user Widely accepted in governmental, legal, and commercial transactions Behavioral biometric: changes over a period of time, is influenced by physical and emotional conditions High intra-class variation for some people Professional forgers are very good at reproducing signatures |
|
Voice (Biometrics)
|
Combination of physical and behavioral biometric
Physical features based on the shape and size of appendages (vocal tract, mouth, ...) Physical characteristics are invariant for each individual Behavioral aspects change over time (age, emotion, ...) Not very distinctive Sensitive to background noise Sometimes the only usable biometric (e.g. authentication over the phone) |
|
Gait (Biometrics)
|
Manner in which a person walks
Appropriate for surveillance scenarios Gait is affected by factors like Footwear, nature of clothing, affliction of legs, walking surface etc. |
|
Multimodal Biometrics
|
|
|
Biometrics Vulnerabilities
|
Circumvention:
Attacker gains access to protected resources -> e.g. replace database templates, override matcher decision Covert acquisition: attacker uses biometric information captured from legitimate users, e.g. playback of a voice password, lifting latent fingerprints Collusion or coercion: attacker colludes or collaborates with a legitimate user (willingly: collusion, unwillingly: coercion) Denial of service: attacker prevents legitimate use, e.g. enrolling many noisy samples -> threshold is decreased, false acceptance rate increases Repudiation: attacker may claim not to have accessed a protected resource by claiming that his data was stolen Biometrics are not secret Biometrics cannot be revoked Biometrics have secondary uses (if the same feature is used by different apps, the user can be tracked if organizations share data) Features can carry private information (genetic disease, use of medication, ...) Automatic identification and profiling is a privacy threat |
|
Attacks Against Biometric Systems
|
Spoofing:
Attacker presents a faked biometric sample -Avoid detection -Masquerade as another individual Attacks against sensors: subvert or replace sensor hardware Segmentation: escape surveillance by causing the system to fail to detect the presence of the appropriate feature (e.g. cover one eye) Replay attacks: attacker intercepts the output flow of the sensor and puts a previously intercepted genuine biometric into its place to gain access Malware-based attacks: attacker replaces the original extractor or matcher with a fake one Attacks against feature extraction: if the extraction algorithm is known, the attacker can try to construct special features that allow an impostor to be accepted Attacks against quality control: attacker tries to pollute the template database with lambs -> threshold goes down Biometric templates need to be stored in plaintext |
|
Spoof detection (Biometric)
|
Differentiating between a genuine biometric trait presented by the right live person versus some other source
Approaches: Sensing vitality signs (pulse, sweat, temperature) Acquiring several raw data samples (e.g. pictures from different angles) Using challenge response techniques |
|
DRE
|
Direct-recording electronic voting system
Records votes by means of an electronic display processes voter selections by means of a computer program |
|
Forms of Electronic Voting
|
Web polls (e.g doodle)
Electronic voting machines (capture votes at the polling station) Online voting: voting possible from any PC with internet Networked polling station: enable voters to vote from any polling station Networked voting machines: allow for automatically adding up votes of different machines E-Counting: Using electronics to accelerate counting paper votes |
|
E-Voting Security
|
Was your vote captured correctly?
Was your vote counted correctly? Can the tally be independently verified? Is your vote anonymous? Can anyone sell their vote? Can voters be coerced to vote in a particular way? |
|
Three Ballot Voting
|
Goal: achieve as many of the security properties of cryptographic voting protocols as possible without using cryptography
Why no crypto? Hard to understand, often not trusted by non-crypto people A candidate that should NOT be elected gets one mark A candidate that should be elected gets two marks The voter gets a copy of one of the three ballots; all three original ballots are thrown into the ballot box Votes for candidate X = # marks for X - # of voters Voters can check the election by checking whether the serial number on their ballot copy appears among the published ballots Weaknesses: trust in the casting process required Missing usability (trial runs show that it is confusing) Use of ballots with serial numbers is forbidden in many countries |
|
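The tally arithmetic from the card as a quick Python check, with an invented toy election of 3 voters and 2 candidates.

    voters = 3
    # marks received across all cast ballots (invented): each voter gives
    # every candidate 1 mark and the chosen candidate a 2nd mark
    marks = {"X": 5, "Y": 4}   # X chosen by 2 voters, Y by 1

    votes = {cand: m - voters for cand, m in marks.items()}
    print(votes)               # {'X': 2, 'Y': 1}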
Bingo Voting
|
Uses commitments, a basic building block of many cryptographic protocols
A commitment scheme consists of two operations -Commit: a commitment c is computed from a message m and published. c reveals no information about m, and m cannot be changed without being noticed in the opening phase -Open/Unveil: additional information is released (e.g. m itself and a random number) and anyone can check that c was a commitment on m Intuitively: m is put into a public safe but the key to the safe stays with the committing party; only in the unveiling phase is the key handed out A commitment scheme should be Hiding: no information about m is revealed by c Binding: for a c corresponding to m it is hard to open c to another message m' <> m Voting: the voter chooses his candidate at the machine A trusted random number generator in the booth generates a fresh random number and assigns it to the selected candidate The machine prints a receipt that lists pairs of candidates and random numbers; the voter's candidate of choice is associated with the freshly generated random number; the other pairs are chosen by the machine from the set of pairs to which commitments were generated before the election The number of unused dummy votes left for a candidate equals that candidate's number of votes The voter can check the receipt against the number displayed by the random number generator |
|
Pedersen Commitments
|
Commitment scheme used as a building block in Bingo Voting (sketch below)
Chapter 11 p 22, 23, 29 |
|
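A toy Pedersen commitment sketch in Python (c = g^m * h^r mod p). The prime and generators are small hard-coded toy values; real schemes (including the ones used in Bingo Voting) use large, carefully chosen groups, so this only illustrates the hiding/binding interface.

    import secrets

    # toy group parameters (far too small for real use)
    p = 2**127 - 1          # modulus (a Mersenne prime)
    g = 3
    h = 7                   # nobody may know log_g(h)

    def commit(m):
        r = secrets.randbelow(p - 1)            # blinding randomness
        c = (pow(g, m, p) * pow(h, r, p)) % p   # hiding: c reveals nothing about m
        return c, r

    def verify(c, m, r):
        # binding: it is hard to find (m', r') != (m, r) with the same c
        return c == (pow(g, m, p) * pow(h, r, p)) % p

    c, r = commit(42)
    print(verify(c, 42, r))   # True
    print(verify(c, 43, r))   # False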
Scantegrity
|
Each ballot has a human readable and a machine readable serial number
Each candidate is randomly assigned a letter on each ballot The voter marks its candidate, rips off the serial number part and writes down the letter assigned on his ballot to the candidate of his choice The letter will allow the voter later on to check that his vote was counted Voter feeds the ballot into the scanner Voters can verify their vote after serial number and corresponding letters are published Problems: No-one can prevent a voter from writing down another letter than the one corresponding to his elected candidate on his ballot -> Hard to differentiate between voter-caused and scan-caused mismatch between record and voter's memo |
|
Circuit Switching Board
|
Allows for tally verification, shows ballot outputs but in random order
|
|
Reasons for electronic voting
|
Advantages of direct recording
Decrease in voter error if the system's interface is designed properly Can accommodate people with different disabilities (helping them to vote without human assistance) |
|
Voting machines Vulnerabilities
|
Install malware (e.g. to record incorrectly or miscount)
Viruses (propagation between voting machines and between voting machines and the election management system could enable large-scale election fraud) Failure to protect ballot secrecy, e.g. votes are stored ordered with timestamps -> an attacker can observe who cast which vote Vulnerability to malicious insiders: county workers could exceed their authority, anyone with access to a county's GEMS server could tamper with ballot definitions or election results, ... Data integrity: no safeguards against corrupted or malicious data injection Cryptography applied but easy to circumvent Access control could easily be circumvented Software has numerous programming errors, including buffer overflows, format string vulnerabilities, type mismatch errors, ... No or poor exception handling |
|
DRM
|
Digital Rights Management
|
|
DRM motivation
|
-keep information from people who haven’t paid for it and protect rights of creator of information
-impose limitations on usage of digital content -Prevent copying -Prevent copying more than x times -Prevent playing/displaying more than once -Prevent playing/displaying after certain time |
|
How DRM setting differs from typical network security setting
|
Classically: A and B want to protect the messages they exchange
DRM: A is a content owner that intends to sell information to an unreliable customer B and prevent B from further disseminating this information Problem: content eventually has to be available in the clear on B's side Solution: B can process the data only on a device trusted by A |
|
DRM areas of application
|
Software
E-books any types of documents audio video TV broadcast |
|
General principle of DRM systems
|
-Make media file available that is encrypted with media key
-Sell “license” which contains 1) Media key encrypted with license key 2) Additional statements on what user is allowed to do with decrypted file (expressed in rights management language) -Make license key available to media player trusted by content provider Typically requires cooperation with manufacturer of media player Media player interprets rights statements included in license |
|
DRM and TV broadcast
|
Conditional access systems:
1) Subscription management service at station -Station encrypts outgoing video (with control word) and embeds messages in outgoing stream (first using authorization key to encrypt control word, then user key to encrypt entitlement messages (EMM) and authorization key) -Issues access tokens such as smart cards to subscribers 2) Set-top box converts cable or satellite signal for TV -Provides EMM to card (indicates which user is entitled to get authorization key) -Also decrypts incoming video with control word if allowed 3) Subscriber smartcard personalizes box -Controls what programs set-top box is able to decrypt -Decrypts and interprets EMMs to determine rights and only decrypts control word if user authorized to view content -Provides control word to set-top box Attacks on conditional access systems: 1) Control word recording -Code words could be recorded when sent from smart card to set-top box and posted on internet -Other people can record broadcast, download control word and decrypt video later 2) Cryptanalysis on stream cipher 3) Blocking “kill-command ECMs” addressed to card 4) User key leakage -Allows for fabrication of fake smart cards |
|
DRM and software
|
-Main problem: Computers cannot be considered to be trusted devices, because no uniqueness available to bind software to hardware
-Three general approaches tried in past 1) Add hardware uniqueness via dongle -Simplest version: carried a serial number -More advanced: execute simple challenge-response protocol -Most advanced: perform some critical part of computation on dongle 2) Create uniqueness on machine -Software installs itself on PC’s hard disk such that it is resistant to naïve copying 3) Use uniqueness that exists on machine already by chance -Store PC’s configuration and check against that (cards present, memory available, type of printer attached …) -Psychological techniques: installation routine embeds registered user’s name and company on screen (e.g. toolbar) --> discouragement to distribute copies -Legal solutions: -Establish anti-piracy trade organization and use it to prosecute -Harassment with threatening letters -Today: -Software industry combines technical and legal measures -Site licenses: Using license servers instead of dongles that limit number of copies that can run simultaneously -Online registration: Large-scale commercial counterfeiting can be detected by monitoring product serial numbers registered online -Rewards for whistleblowers -Other business models: Limited or older version for free and sell password to unlock full functionality or updates, Free copies to universities but not to companies, Make money from selling support or advertisements |
|
DRM and DVDs: CSS
|
Content Scrambling System
Copy-protect DVDs Uses three types of keys 1) Player keys kp -Used to encrypt disk key for each player -Shared between player and content provider 2) Disk keys kd -Used to encrypt title keys -Disk key encrypted with each player key and all ciphertexts stored on disk 3) Title keys -Stored on disk encrypted with disk key Player stores one of the player keys, decrypts the disk key with its player key, checks the hashed and encrypted disk key stored on the DVD, uses the disk key to decrypt title keys, uses title keys to decrypt content Problems -On each DVD a hashed/encrypted disk key is available --> easy to check if a guess on kd is correct or to brute force kd -Due to weaknesses in the CSS stream cipher the complexity of recovering kd can be reduced to 2^25 -Weak encryption with too short key length -Based on manufacturer keys staying secret (but no built-in tamper-resistant processors in devices) |
|
DRM and DVDs: AACS
|
Advanced Access Content System
-Encryption based on AES -Key management based on tree-based media key block scheme 1) Device keys: each device is given a set of secret device keys during manufacturing 2) Media key: used to encrypt title keys which in turn are used to encrypt data blocks on disc 3) Media key block: allows players with different sets of device keys to calculate the same media key and decrypt it -Encrypted with several device keys -Indicates which device keys to use one by one to decrypt the correct media key Subset difference trees: -There is a master tree of keys starting with a root key Each device # Is associated with exactly one leaf key # Does not know that one leaf key # Obtains a set of device keys # Can use this set of keys to compute any key in the tree except for keys between its associated leaf key and the root key -In addition there is a tree of keys for every sub tree of the master tree with the same properties -In each tree child keys are derived from their parents with a one-way hash function Device revocation: -Single device: include encryption of media key with corresponding leaf key in media key block -More than one device close to each other: encrypt media key with key on higher level in master tree -More than one device not contiguous: with help of sub trees or encrypting media key subsequently with device keys |
|
DRM and audio
|
-Stopped on audio CDs in 2007 (cost didn’t measure up to result)
-iTunes and Napster also offer DRM-free mp3 downloads -Problems of DRM systems formerly used for music downloads # No interoperability between different online stores # Each store typically required user to install software # Music downloaded from different stores could only be played on certain hardware # Platform vendors financially gained from these lock-ins but not actual copyrights holder # All DRM mechanisms seem to get broken sooner or later |
|
Watermarks and fingerprinting in DRM
|
-Hidden watermarks: hiding messages with purpose of recording copyright owner, purchaser and distributor
# Hiding marks in least significant bits of audio or video data # Determine location of mark with secret key # Quality: •Robustness: how reliable is extraction of watermark under data modification during regular usage such as compression or targeted data manipulation with purpose of destroying watermark •Transparency: how noticeable is acoustic or optical change of quality created by marking •Amount of data that can be embedded •How fast is embedding / extracting procedure -Fingerprints: content identification with purpose of proving that two multi-media objects are equal |
|
Legal issues of DRM
|
-World Intellectual Property Organization passes Copyright Treaty (1996):
# Requires participating nations to adopt laws against DRM circumvention -Digital millennium copyright act (US version): # Forbids circumvention of technological measure that effectively controls access to a work if done with primary intent of violating rights of copyright holders # Act contains exceptions for research and reverse engineering for interoperability purposes -European 2001 directive on copyright (European version): # Only applies to commercial purposes not private copies |
|
Legal issues of DRM-free media
|
-Dissemination of downloaded media
# Most download shops restrict downloads to private usage including burning to CD or copying to other players # Most do not allow copies for third parties, forwarding, reselling, commercial usage # Few allow self-burned audio CDs to be handed for free to close friends and relatives -Reselling # Audio, video and software CDs and DVDs may be resold if all local copies are deleted # For downloads there is no clear jurisdiction yet -Return # No right of return for downloads as opposed to sealed audio, video, software media -Stolen players / computers # Watermarking may cause problems if players or computers are stolen and audio or video files marked as yours appear in file sharing applications, no court cases like that yet |
|
Magnetic stripes card
|
-Advantages: flexibility of cash and checks, security of checks, solvency of customer verified before payment is accepted
-Disadvantages: needs infrastructure, transaction cost -Card types 1) Debit card •Customer must have bank account associated with card •Transaction processed in real time 2) Charge card •Customer doesn’t need to pay immediately but only at end of monthly period •If he has bank account, it is debited automatically •Otherwise he needs to transfer money directly to card association 3) Credit card •Customer doesn’t need to pay immediately, not even at end of monthly period •Bank doesn’t count interest until end of monthly period 4) Smart card •Cards with chip such as EMV cards (Europay International, MasterCard and VISA) •Corporate and student cards with payment functions |
|
Classification of (E)-Payment systems
|
-Pre-paid, pay-now, post-paid
-On-line, off-line |
|
General security requirements for payments
|
-Authorization
# Payment must always be authorized by payer (physical, PIN, digital signature) # Payment may also need to be authorized by bank -Data confidentiality and authenticity # Transaction data should be intact and authentic # External parties should not have access to data -Availability and reliability # Payment infrastructure should always be available # Centralized systems should be designed with care (critical components need replication and higher level of protection) -Atomicity of transactions # All or nothing principle: either whole transaction is executed successfully or state of system doesn’t change (must be possible to detect and recover from interruptions such as communication failures) -Privacy (anonymity and untraceability) # Customers should be able to control how their personal data is used by other parties # Sometimes best way to ensure that personal data will not be misused is to hide it •Anonymity: customer hides his identity (name, account number, etc.) from merchant •Untraceability: not even bank can keep track of which transactions customer is engaged in |
|
Credit card fraud
|
-Stolen cards
•Originally protected with help of hot card lists and limits # Merchant gets local hot card list + floor limit # Higher transaction need to be authorized by national call center or card issuer •Today many transactions authorized by credit card company directly (hot card system and limits still in use) •Merchant bears full risk of disputes -In early 1980’s it was possible to re-encode magnetic strip of any credit card with account number and expiry date of valid card •Sufficient for crooks to collect discarded receipts and re-encode a card with data printed on discarded receipt -In early 1990’s banks added Card verification values (CVVs) to magnetic strip •3-digit Message Authentication Code (MAC) computed over contents of magnetic stripe and keyed with key known to issuer of card only •CVV cannot be computed by crooks collecting receipts -Crooks moved to skimming •Tricking card holder into swiping his card through their own card reader that reads CVV together with rest -Use operating business to obtain valid credit card information (corrupt salesman) -Fake terminals and terminal tapping devices (use on ATM fraud) |
|
Using credit cards online
|
-Examples
•MOTO (Mail Order Telephone Order): Supports credit card payment on phone or fax •SSL/TLS: General secure transport connection between apps -Server authentication (or server and client) -Data confidentiality and integrity on transport layer between client and server •SET (Secure Electronic Transactions): Never became widely spread although would solve many risks of credit card use •Dominant today: SSL/TLS + variations -Risks of online use •Phishing credit card number with fake websites •Using spyware to log user input (log information) •Hacking into merchants computers (steal information) •Use of magnetic strip CVV not possible •No user authentication through SSL/TLS only (as users do not have certificates) --> motivation to develop SET -Credit card payment with SSL/TLS •User visits merchants web site and select goods •User fills out a form with his credit card details •Form data is sent to merchants server via SSL/TLS o Merchants server is authenticated o Transmitted data is encrypted •Merchant checks solvency of user •If satisfied, it ships the good to user •Clearing happens later using existing infrastructure deployed for non-online credit card based payments •Advantages # SSL/TLS is part of every browser and web server # Protects credit card number against eavesdropping •Disadvantages # Risk that credit card numbers are stolen from merchants computer (same as non-online) # Bank will learn all transactions user made # No PIN use, no Signature (no user authentication) # Merchant authentication via certificate --> phishing problems (fake website) -CVV2 used to enhance security |
|
Card Verification Value (CVV)
|
3-digit Message Authentication Code (MAC) computed over contents of magnetic stripe and keyed with key known to issuer of card only
|
|
Card Verification Value (CVV2)
|
•Security enhancement for online / over the phone use of credit cards
# 3 or 4 digit value printed on credit card # Generated by hashing card number and expiration date under a key known only to issuing bank # Different from CVC on magnetic stripe •Unlike credit card number, CVV2 is not to be stored by merchant for longer than authorization of transaction •Limitations: does not prevent phishing and interferes with periodical card payments |
|
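The real CVV/CVV2 algorithms are DES-based and issuer-specific; as a hedged illustration of the idea (a short keyed check value over card number and expiry, truncated to 3 digits), here is a Python stand-in that substitutes HMAC-SHA256 for the actual cipher. Key, PAN and expiry are invented.

    import hmac, hashlib

    ISSUER_KEY = b"secret-key-known-only-to-issuer"   # invented

    def check_value(pan, expiry):
        # keyed MAC over card data, truncated to 3 decimal digits
        mac = hmac.new(ISSUER_KEY, f"{pan}{expiry}".encode(), hashlib.sha256)
        return int.from_bytes(mac.digest()[:4], "big") % 1000

    print(f"{check_value('4111111111111111', '12/27'):03d}")
    # merchant sends PAN + expiry + this value; only the issuer can recompute it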
PayPal
|
•Transfer money P2P online
•Funds stored in PayPal account •Payment possible via credit card, direct debit, PayPal deposit •Banking information not provided to merchants •Allows for money transfer to any registered PayPal email address |
|
Smart cards (EMV)
|
-EMV (Europay Mastercard Visa) standard deployed in Europe for PIN-protected credit and debit card transactions
-Three types of cards using different authentication mechanism 1) Static data authentication (SDA): Symmetric crypto on card only 2) Dynamic data authentication (DDA): Digital signatures generated on cards 3) Combined data authentication (CDA) |
|
Static data authentication (SDA)
|
•Symmetric crypto on card only
•Customer enters card into terminal •Card sends certificate of issuing bank, account number and other data + signature to terminal •Terminal verifies signature, merchant enters amount •Terminal solicits PIN, user enters PIN, terminal sends PIN to card •Card checks PIN and generates MAC (on merchant ID, amount, serial number, …) with symmetric key shared with bank •Terminal optionally (depending on amount) goes online to submit MAC, MAC checked by bank •Problems # Crooks use fake terminals that extract card information, which allows creating fake magnetic stripe cards and using them # Terminals are often tampered with such that PIN and card information can be read (invisible from outside, inside the terminal) # Wiretapping between terminal and branch server |
|
Dynamic data authentication (DDA)
|
•Digital signatures generated on cards
•Each card has public/private RSA key pair •When card inserted in terminal # Terminal sends random challenge to card # Card signs random challenge and sends it back with certificate for its public key # Terminal checks signature and certificate # Terminal sends transaction data and PIN encrypted with public key to card •Everything else same as on SDA •Advantages # PIN not sent in clear between terminal and PIN pad # Terminal convinced that correct card is present |
|
Combined data authentication (CDA)
|
•As DDA but additionally MAC is signed with private card key (transaction data tied to private key and to successful PIN authentication)
•Problem # Wicked merchant could mount false front over payment terminal such that wrong amount would be displayed to customer (no way for user to trust terminal) |
|
General idea of SET
|
-Secure Electronic Transaction
-Protocol designed to protect credit card transactions on the Internet -Participants 1) Cardholder •Wants to buy something from merchant on Internet •Authorized holder of payment card issued by issuer (bank) 2) Merchant •Sells goods/services via Web site or by email •Has relationship with acquirer (bank) 3) Issuer •Issues payment cards •Responsible for payment of debt of cardholders 4) Acquirer •Maintains accounts for merchants •Processes payment card authorizations and payments •Transfers money to merchant account, reimbursed by issuer 5) Payment gateway •Interface between Internet and existing credit card payment network -Uses dual signature (message split in two parts for different receivers, the hash of the hashes of both messages is signed, XOR of two messages and signature appended to both message parts) -Protects against # Eavesdropping on credit card number transfers via Internet # Identity, account and credit card number kept secret from merchant # Order information kept secret from bank # Non-repudiation in both directions •User cannot deny having made the order •Merchants cannot claim to have received an order from a client that did not originate from that client |
|
Services of SET
|
-Cardholder account authentication
•Merchant can verify that client is legitimate user of card •Based on certificates -Merchant authentication •Client can authenticate merchant and check if it is authorized to accept payment cards •Based on certificates -Confidentiality •Cardholder account and payment information (credit card number) is protected while it travels across network •Credit card number is hidden from merchant too -Integrity •Messages cannot be altered in transit in an undetectable way •Based on digital signatures |
|
Why SET failed
|
-Fewer benefits than expected
•Merchants like to collect credit card numbers (as indexes) •Optionally SET allows the merchant to get the credit card number from the acquirer (--> security improvements of SET negated) -Too high costs •SET requires a PKI and in particular issuing certificates to card holders •The alternative, SSL/TLS protection of credit card transfers, is low cost and generally accepted -SET requires download and installation of special software and obtaining a public-key certificate |
|
Basic principle of ATMs
|
-Card initialization
# Account Number (PAN) # PIN key (KP) # C=Enc_KP (PAN) decimalized # Natural PIN = first four digits of C # Offset -Offset used to allow customer to choose own PIN -First ATMs # Stored PIN key and operated offline # Each card contained offset and PAN on magnetic stripe # ATM computed PIN from PAN and PIN key, added offset and compared it with users input -Today # PIN random, PIN and secret PIN used as input to encryption function, hash of cipher text “PIN verification code” stored on card # ATM typically stores terminal master key # ATM can be operated in offline or online mode #Offline mode •PIN key encrypted with key derived from its ATMs master key and sent to ATM •ATM decrypts PIN key and can now check PINs entered by user into ATM # Online mode •PIN entered by customer is sent from ATM to central security module encrypted with key derived from terminals master key •PIN checked by central module # If cards of other banks are accepted, PINs are decrypted and re-encrypted with help of switches provided e.g. by VISA with which bank shares a secret key |
|
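A hedged sketch of the natural-PIN-plus-offset scheme described on the card. The real method (IBM 3624 style) encrypts the PAN with DES under the PIN key and decimalizes the result; here HMAC-SHA256 stands in for the block cipher and the decimalization is simplified, so treat it as an illustration of the offset arithmetic only. PAN, key and chosen PIN are invented.

    import hmac, hashlib

    PIN_KEY = b"bank-pin-key"                         # invented stand-in for KP

    def natural_pin(pan):
        c = hmac.new(PIN_KEY, pan.encode(), hashlib.sha256).hexdigest()
        digits = [str(int(ch, 16) % 10) for ch in c]  # simplified decimalization
        return "".join(digits[:4])                    # first four digits

    def offset(pan, chosen_pin):
        nat = natural_pin(pan)
        # digit-wise difference mod 10, stored on the card's magnetic stripe
        return "".join(str((int(c) - int(n)) % 10) for c, n in zip(chosen_pin, nat))

    def verify(pan, card_offset, entered_pin):
        nat = natural_pin(pan)
        derived = "".join(str((int(n) + int(o)) % 10) for n, o in zip(nat, card_offset))
        return derived == entered_pin

    pan = "5434123456789012"
    off = offset(pan, "7531")
    print(verify(pan, off, "7531"), verify(pan, off, "1234"))   # True False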
ATM problems
|
-Switches necessary for interoperation between banks as setting up symmetric keys between each pair of banks not possible
-Early ATMs and bank server counterparts used software encryption rather than hardware encryption # Keys needed to be available in software as well --> misuse by programmers and technical staff -Some early cards were vulnerable to “encryption replacement attacks” # No binding between account number and encrypted PIN on magnetic strip --> account number could be changed and card would still be accepted by ATM as encrypted PIN correctly decrypts to PIN entered by user -Simple processing errors: early ATMs resent transactions if they didn’t receive networks confirmation message --> double debits of customers -Theft from mail -Fraud through bank staff (repairmen installed laptop inside ATM, bank used same PIN key in tests and live system) -To counter offline problem many banks thought up check digit schemes to check PINs without PIN key being available --> observing one valid PIN sufficient to fool offline ATMs |
|
Online banking
|
-Goals
# Server authentication •All current web-browser based system use SSL/TLS handshake to authenticate server based on certificate # Client authentication (entity and transaction) •PIN/TAN, PIN/iTAN •Lists of one-time passwords •Chip cards •mTANs # Confidentiality and integrity of data transmission between client and server •SSL/TLS record layer protocol used -Problems # Authenticity of public key of bank # Users must check that connection to bank is secure to avoid phishing # Applets should be signed and signature should be checked by users |
|
PIN/TAN, PIN/iTAN
|
-Entity authentication typically based on ID and PIN
-IDs are sometimes randomly assigned, sometimes account numbers or social security numbers
Transaction authentication numbers (TANs):
-Originally: list of TANs provided to customer
-Customer could use TANs in any order
-Problems
# TANs can be intercepted and used by attacker if real transaction is interrupted
# TAN can be used by attacker if entered on phishing site
iTANs:
-Numbered list of TANs provided to customer
-Bank server asks for TAN with particular index for each transaction and binds this TAN to the transaction
-Advantage
# TAN cannot be used for other transaction even if current one is interrupted
# Requires phishing for TANs and using them at the same time
Problems:
-Phishing
-Registration security (bootstrapping)
# User has to obtain home banking identity and initial password
# Initial password typically sent per mail
# Sometimes home banking account activated by additional phone call
-Infected client platforms
# Viruses, trojan horses and worms on client
# Tamper with root certificates pre-installed in web browser
# Steal user's PINs
# Mislead users into accessing fake websites |
|
mTANs
|
•Idea
-Use SMS as out-of-band authentic channel for TANs
-Customer makes online money transfer, sends it to bank
-Receives mTAN + account number of receiver + transaction amount via SMS from bank
-Enters mTAN into online banking website
•mTAN considered more secure than iTAN, as mTAN method is not vulnerable even to online phishing (if user checks the additional account number and amount)
•Customer does not have to carry a TAN list around when on the move
•Big problem on the rise: mobile malware that intercepts mTANs and forwards them to the attacker |
|
Eurograbber
|
Attack on account protected with mTAN
•Trojan installed on victim's computer
# Logs user name and password when user accesses online banking website
# Prompts user to enter mobile phone number
# Prompts user to complete security update by following steps sent to them via SMS
# SMS sent to mobile phone number contains clickable link
# Clicking on link installs mobile malware
•Mobile malware on victim's phone
# Intercepts mTANs and forwards them to attacker
# Ensures that mTAN SMS does not appear in victim's inbox
# Uses mTAN to authorize transactions the attacker initiated with help of intercepted username/password |
|
Risks of using PIN-based authorization only (case study in Norway)
|
-Shows that attacker can brute force online banking accounts that use PIN-based client authentication with success probability close to one
-Unfortunately the use of TANs was not very common at that time
-Attack can be detected easily by the bank's intrusion detection system if it is not distributed
-SSNs = Social Security Numbers: used in many countries for identification purposes
-Basic attack idea
# Attacker has all online account IDs of the bank
# Attacker has set of possible PINs
# Attacker picks account ID and randomly chooses PIN
# Attacker repeats with new PIN
# Attacker repeats with new account ID |
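A back-of-the-envelope sketch of why this distributed guessing attack succeeds with probability close to one; the account count, PIN space and lockout limit below are assumed example numbers, not figures from the case study:

```python
def p_at_least_one_success(num_accounts: int, pin_space: int, guesses_per_account: int) -> float:
    p_single = guesses_per_account / pin_space     # chance of hitting one account's PIN
    return 1 - (1 - p_single) ** num_accounts      # chance of hitting at least one account

# e.g. 100,000 account IDs, 4-digit PINs, 3 guesses per account before lockout
print(p_at_least_one_success(100_000, 10_000, 3))  # ~1.0 (practically certain)
```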
|
DigiCash
|
-Blind RSA signatures
# Bank's public RSA key is (e, m), its private key is d
# User U generates a coin (sn, exp, val) and computes its hash value h = H(sn, exp, val)
# User U generates random number r (blinding factor), computes h * r^e mod m, and sends it to bank
# Bank signs blinded coin by computing (h * r^e)^d = h^d * r mod m
# When U receives the blindly signed coin, he removes the blinding: h^d * r * r^(-1) = h^d mod m
# U has obtained the digital signature of the bank on the coin
# Bank cannot link h^d * r and h^d together (r is random)
-Problem: how much should the user be charged? Bank signs blinded coin, does not know value of coin
-Solution: bank can use different signing keys for different denominations
-User must authenticate himself to bank when withdrawing money, so that bank can charge his account
-Merchant must authenticate himself to bank when depositing money, so that bank can credit his account
-Messages between user and vendor should be encrypted in order to prevent theft of money |
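A minimal sketch of the blind-signature arithmetic above, using deliberately tiny (insecure) RSA parameters so each step is visible:

```python
import hashlib

# Toy RSA key of the bank: modulus m = p*q, public exponent e, private exponent d
p, q = 61, 53
m = p * q                            # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))    # 2753

def H(coin) -> int:
    # Hash of the coin (sn, exp, val), reduced mod m for this toy example
    return int.from_bytes(hashlib.sha256(repr(coin).encode()).digest(), "big") % m

coin = ("serial-42", "2025-12", 10)  # (sn, exp, val), hypothetical values
h = H(coin)

r = 7                                # blinding factor, coprime to m
blinded = (h * pow(r, e, m)) % m     # user sends h * r^e mod m to the bank

blind_sig = pow(blinded, d, m)       # bank signs: (h * r^e)^d = h^d * r  (mod m)

sig = (blind_sig * pow(r, -1, m)) % m  # user unblinds with r^(-1): obtains h^d mod m

assert pow(sig, e, m) == h           # anyone can verify the bank's signature on the coin
print("valid blind signature:", sig)
```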
|
Micropayment Schemes
|
Motivation
Many transactions have a very low value
Transaction costs of credit-card, check- and cash-based payments may be higher than the transaction value
Try to reduce losses of vendors and payers simultaneously |
|
PayWord
|
hash-chain based micropayment scheme
Check-like, credit based
Uses public key crypto but very efficiently
-User signs a single message at the beginning
-Message authenticates all micropayments to the same vendor that will follow
Registration Phase:
User provides Broker with bank account information, shipping address, public key
Broker issues a certificate for User: Cert_U = {B, U, addr_U, K_U, exp, more info}K_B^-1
Certificate is a statement that guarantees redemption of micro-payment tokens to any vendor if they are turned in before the expiration date
Payment Phase:
User contacts new vendor -> computes fresh payword chain: w_{n-1} = h(w_n), w_{n-2} = h(w_{n-1}) = h^2(w_n), ..., w_0 = h^n(w_n)
n is chosen by the user, w_n is picked at random
User computes a commitment M = {V, Cert_U, w_0, date, more info}K_U^-1
Commitment authorizes Broker to pay Vendor for any of the paywords w_1, ..., w_n that the Vendor hands in to the Broker before the given date
Paywords are vendor specific; they have no value to another vendor as the vendor's ID is included in the commitment M
When Vendor receives w_i it can verify it by checking that it hashes to w_{i-1}
Hash function is pre-image resistant: w_{i+1} cannot be computed from w_i
Vendor needs to store only the last received payword and its index
Variable size payments can be supported by skipping the appropriate number of paywords
Redemption Phase:
At the end of each pay period, vendor redeems paywords for real money at the broker
V sends B a redemption message that contains commitment M and the last received payword w_k
Broker verifies commitment by checking user's signature, checks that iteratively hashing w_k k times results in w_0
If satisfied, the broker pays the vendor k units and charges the account of U with the same amount |
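A short sketch of the payword chain mechanics; SHA-256 as the hash function h and a chain length of 100 are arbitrary choices for illustration, and the signed commitment itself is omitted:

```python
import hashlib, os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(n: int):
    # chain[i] corresponds to w_i, with w_{i-1} = h(w_i) and w_0 = h^n(w_n)
    w_n = os.urandom(32)           # w_n picked at random by the user
    chain = [w_n]
    for _ in range(n):
        chain.append(h(chain[-1]))
    chain.reverse()                # chain[0] = w_0, ..., chain[n] = w_n
    return chain

def vendor_accepts(w_prev: bytes, w_new: bytes, skipped: int = 1) -> bool:
    # Vendor only stores the last payword; a payment worth `skipped` units
    # must hash back to it after `skipped` applications of h
    x = w_new
    for _ in range(skipped):
        x = h(x)
    return x == w_prev

chain = make_chain(100)                       # user commits to w_0 = chain[0]
assert vendor_accepts(chain[0], chain[1])     # pay 1 unit with w_1
assert vendor_accepts(chain[1], chain[4], 3)  # variable-size payment: skip to w_4
```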
|
Bitcoin
|
Motivation
E-Cash without involving any trusted third party or bank
Create a currency that is independent from any government
Allow for anonymous payments
Bitcoin Addresses:
Any user can have as many addresses as he pleases
To generate an address, he generates a public/private key pair suitable for ECDSA
Address is hash of the public key, generated with cryptographic hash function RIPEMD-160
The loss of private keys means a loss of all bitcoins related to the address corresponding to the private key
Transactions:
Bitcoins only exist in the form of transactions
Each transaction specifies one or more previous transactions as input
Each transaction specifies one or more output addresses and the amount they are to receive
A transaction is authorized by a signature with the private key corresponding to the referenced output address of the referenced previous transaction
The complete amount of bitcoins available at the referenced output address of the previous transaction will be consumed in the transaction
Owner can split the transfer to send parts of the amount to an address owned by himself
Combining input: several prior transactions can be combined -> each previous transaction has to be authorized by a signature |
|
Bitcoin Wallet
|
The wallet of a client consists of the public/private key pairs he owns
Owning the private key of an address enables a user to spend all bitcoins associated with the address -> highly attractive for information-harvesting malware |
|
Transaction Verification (Bitcoins)
|
Ensures that entity spending bitcoins is authorized
Solves the double spending issue (output of a transaction may only be used once)
Distributed idea: use a proof of work to timestamp valid transactions
Proof of work: dedicate computing time to solve a mathematically difficult problem
Here: find a nonce that, hashed together with the other block data, yields a value that starts with a predefined number of zeros
Easy to check whether a nonce solves the problem |
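A toy version of this proof of work; real Bitcoin uses double SHA-256 and a compact target encoding, while a single SHA-256 and a plain leading-zero-bits target are used here to keep the sketch short:

```python
import hashlib

def find_nonce(block_data: bytes, zero_bits: int) -> int:
    target = 1 << (256 - zero_bits)           # hash must fall below this value
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce                      # hard to find ...
        nonce += 1

def check_nonce(block_data: bytes, nonce: int, zero_bits: int) -> bool:
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - zero_bits))   # ... easy to verify

data = b"toy block with collected transactions"
n = find_nonce(data, 16)                      # ~2^16 tries on average
print(n, check_nonce(data, n, 16))
```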
|
Blocks (Bitcoins)
|
Collection of transactions
Three types of blocks:
-Blocks in the main chain of blocks (successfully verified)
-Blocks in a side branch of the main chain
-Orphan blocks (no previous blocks can be found) |
|
Merkle Root (BitCoins)
|
Combines the hashes of all transactions included in the block into one single hash
|
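A sketch of how transaction hashes can be combined into a single Merkle root, following Bitcoin's approach of double SHA-256 and duplicating the last hash on levels with an odd number of entries:

```python
import hashlib

def dsha256(x: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(x).digest()).digest()

def merkle_root(tx_hashes) -> bytes:
    level = list(tx_hashes)
    if not level:
        raise ValueError("no transactions")
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])           # duplicate last hash if the level is odd
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = [dsha256(f"tx{i}".encode()) for i in range(5)]   # hypothetical transaction hashes
print(merkle_root(txs).hex())
```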
|
Block Verification (BitCoins)
|
Each transaction is published in the network
Each participant collects newly incoming transactions
Checks for each transaction that:
-the signature is valid
-the sum of input amounts is at least as large as the sum of output amounts
-the referenced output of the previous transaction has not been used in any block in the main chain so far
Collects verified transactions in a block
Varies the nonce in the block until the SHA-256 hash of the block starts with as many leading zeros as the current "difficulty" demands
Includes hash and nonce in the block and broadcasts the block
Difficulty changes every 2016 blocks
-Adapts to how long it currently takes to verify blocks; difficulty may go up or down
-Can be calculated by each client
-Is set such that verifying a block takes 10 min on average |
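A rough sketch of the retargeting rule just described; Bitcoin additionally clamps the adjustment factor, which is omitted here:

```python
def retarget(old_difficulty: float, actual_seconds: float) -> float:
    # Scale the difficulty so one block keeps taking ~10 minutes on average
    expected_seconds = 2016 * 10 * 60        # 2016 blocks at 10 minutes each
    return old_difficulty * expected_seconds / actual_seconds

print(retarget(1.0, 2016 * 8 * 60))          # blocks came too fast -> difficulty rises
```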
|
Mining (BitCoins)
|
Each BitCoin peer can participate in mining
Mining is part of transaction verification
A mined block includes a base coin transaction Tx0
Tx0 transfers the current mining value to one of the user's addresses and thus newly creates bitcoins
Mining value changes every 210,000 blocks
Started with 50 BTC, halves every 210,000 blocks -> max of 21 million bitcoins |
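A quick sanity check of the 21 million cap implied by the halving schedule; real Bitcoin works in integer satoshis and rounds down, but floats are fine for this estimate:

```python
subsidy = 50.0                   # 50 BTC per block at the start
total = 0.0
while subsidy > 1e-8:            # subsidy effectively reaches zero
    total += 210_000 * subsidy   # 210,000 blocks per halving period
    subsidy /= 2
print(total)                     # ~21,000,000 BTC
```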
|
Transaction Fee (Bitcoins)
|
Each transaction can have a transaction fee
Amount of the transaction fee is the difference between the sum of the inputs (referenced previous transactions) and the sum of the output values
-> Block verification will continue even after mining of new bitcoins has ended
Incentive will then be the sum of the fees of all transactions in the block |
|
How does Block Verification solve double spending
|
If same bitcoins were used in two transactions:
They'd either end up in the same block -> direct detection
Or one of these transactions makes it into a block in the main chain first -> the other one would not be included in any following block
A peer also cannot delete his prior transactions from a block:
-He would have to redo the work to verify the manipulated block
-Then in addition overtake the non-manipulated block chain
Fast transactions:
A payee can only be sure that a transaction really is in the main chain once the block containing it is buried under several blocks
6 blocks are considered sufficient to ensure the block really made it into the main chain
Each verification takes 10 min -> 60 min to be sure
-> Bitcoins are not very well-suited for fast transactions |
|
Privacy on Public networks
|
Internet designed as a public network: machines on your LAN may see your traffic, routers see passing traffic
Routing information is public
Encryption does not hide all identities
Encryption hides payload but not routing information
Even IP-level encryption (tunnel-mode IPsec/ESP) reveals IP addresses of the IPsec gateway |
|
Anonymity
|
hiding who performed a given action
|
|
Untraceability
|
making it difficult for an adversary to identify that a given set of actions was performed by the same subject
|
|
Unlinkability
|
Hiding information about the relationships between any items
|
|
Unobservability
|
hiding the items themselves (e.g. hide the fact that any message was sent at all)
|
|
Pseudonymity
|
making use of a pseudonym instead of the real identity
|
|
Attacks on Anonymity
|
Passive traffic analysis (e.g. who is talking to whom)
Active traffic analysis (inject packets or put a timing signature on packet flow)
Compromise of network nodes
-Not obvious which node was compromised -> don't trust individual nodes |
|
Broadcast (Anonymous Communication)
|
Receiver anonymity
Idea: broadcast an encrypted message that only the intended receiver can decrypt
Disadvantages:
-Requires decrypting all messages
-Computationally unaffordable in most scenarios |
|
Mix (Anonymous Communication)
|
Mix is a node that
-receives encrypted messages
-decrypts messages and learns a new address and an encapsulated message
-collects messages (until threshold number/time)
-forwards collected messages in random order to the newly learned addresses
Sender is anonymous with respect to receiver
Eavesdropping on incoming/outgoing traffic learns identities of all receivers but cannot link them to senders
Problem of a single mix:
-Mix can link senders and receivers
-Attacker that injects traffic and monitors incoming and outgoing traffic may be able to link sender and receiver
-> Mix cascades
Also pad and buffer traffic to foil correlation (e.g. by message length)
More mixes mean lower performance but higher security
Typical compromise: 3 mixes
To reply to a message: anonymous return addresses |
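A minimal sketch of the batching-and-shuffling behaviour of a single mix; the decryption step is only stubbed out, and the threshold/flush policy is a simplification:

```python
import random

class Mix:
    def __init__(self, threshold: int, send):
        self.threshold = threshold
        self.send = send             # callable used to forward (next_hop, payload)
        self.pool = []

    def receive(self, encrypted_msg):
        # In a real mix this is where the message is decrypted with the mix's
        # private key, revealing the next hop and the encapsulated message.
        next_hop, inner = decrypt_stub(encrypted_msg)
        self.pool.append((next_hop, inner))
        if len(self.pool) >= self.threshold:
            self.flush()

    def flush(self):
        random.shuffle(self.pool)    # break the arrival order
        for next_hop, inner in self.pool:
            self.send(next_hop, inner)
        self.pool = []

def decrypt_stub(msg):
    # Stand-in for public-key decryption; msg is already a (next_hop, inner) pair here
    return msg

if __name__ == "__main__":
    mix = Mix(threshold=3, send=lambda hop, payload: print("->", hop, payload))
    for m in [("alice", b"m1"), ("bob", b"m2"), ("carol", b"m3")]:
        mix.receive(m)               # the third message triggers a shuffled flush
```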
|
Onion Routing
|
Real-time bi-directional
Connections are application independent
Initiator randomly selects intermediate routers
Routers are under different administrative controls
Use hybrid encryption
Initiator anonymity with respect to receiver is optional
Emphasis: unlinkability of initiator and responder with respect to third parties or compromised routers
Three phases:
-Connection setup
•Initiator creates onion
•Onion defines path of connection
•Onion routers along the path know only the next and previous hop
-Data movement
-Connection tear-down
The onion routing network is accessed via a series of proxies
An initiating application makes a socket connection to an application proxy
The application proxy maps messages to a generic format that can be passed through the onion routing network
The application proxy connects to an onion proxy, which defines a route through the onion routing network |
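A toy illustration of building and peeling an onion. A SHA-256-derived keystream stands in for the hybrid encryption a real onion router would use, and the router names and keys are made up:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Symmetric: the same call encrypts and decrypts one layer
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def build_onion(route, payload: bytes) -> bytes:
    # route = [(router_name, key), ...]; each layer, once decrypted by its router,
    # reveals only the next hop and the still-encrypted inner onion
    onion, next_hop = payload, "EXIT"
    for name, key in reversed(route):
        onion = xor_crypt(key, next_hop.encode().ljust(8, b"\0") + onion)
        next_hop = name
    return onion

def peel(key: bytes, onion: bytes):
    plain = xor_crypt(key, onion)
    return plain[:8].rstrip(b"\0").decode(), plain[8:]   # (next hop, rest of onion)

route = [("R1", b"k1"), ("R2", b"k2"), ("R3", b"k3")]    # hypothetical routers/keys
onion = build_onion(route, b"hello responder")
for name, key in route:
    next_hop, onion = peel(key, onion)
    print(f"{name} forwards to {next_hop}")              # each router learns only its neighbours
print(onion)                                             # the exit router holds the payload
```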
|
Dining Cryptographers (DC)
|
Idea to make message public but untraceable
-> Anonymity for the sender
But difficult (in a group of N, N random bits are needed to send 1 bit)
The idea generalizes to any group of size N:
-For each bit of the message, every user generates 1 random bit and sends it to 1 neighbor
-Every user learns 2 bits (his own and his neighbor's)
-Each user announces own bit XOR neighbor's bit
-Sender announces own bit XOR neighbor's bit XOR message bit
-XOR of all announcements = message bit
-Every randomly generated bit occurs in this sum twice (and is canceled by XOR); the message bit occurs once
Requires pairwise secure channels
Massive communication overhead |
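A small sketch of one DC-net round for a group of N users, arranged in a ring so each user shares one random bit with a neighbour:

```python
import secrets

def dc_round(n: int, sender: int, message_bit: int) -> int:
    shared = [secrets.randbits(1) for _ in range(n)]   # shared[i]: bit between user i and i+1
    announcements = []
    for i in range(n):
        bit = shared[i] ^ shared[(i - 1) % n]          # own shared bit XOR left neighbour's
        if i == sender:
            bit ^= message_bit                         # sender folds the message bit in
        announcements.append(bit)
    result = 0
    for b in announcements:
        result ^= b                                    # every shared bit cancels out in the XOR
    return result                                      # only the message bit remains

assert all(dc_round(5, sender=2, message_bit=b) == b for b in (0, 1))
```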
|
Three-Person DC Protocol
|
-Three cryptographers are having dinner
-Either the NSA is paying for the dinner, or one of them is paying, but wishes to remain anonymous
-Goal: find out if the NSA was paying, but do not reveal which one of the cryptographers was paying if it was not the NSA
-Solution
Each cryptographer flips a coin and shows it to his left neighbor. Each cryptographer will see two coins: his own and his right neighbor's
Each cryptographer announces whether the two coins are the same. If he is the payer, he lies (says the opposite)
Odd number of "same" -> NSA is paying; even number of "same" -> one of them is paying
But a non-payer cannot tell which of the other two is paying! |
|
Tor
|
Second-generation onion routing network
Specifically designed for low latency
Mainly supports sender anonymity
Uses directory servers that are considered more trusted nodes
Adds integrity protection
Adds rendezvous points and hidden services
Responder anonymity via location-protected servers
Each onion router maintains a TLS connection to every other onion router
Each onion router maintains several public/private key pairs:
-long-term identity key pair to sign certificates
-short-term onion key pair used to encrypt the ephemeral key part of the client during circuit establishment (changed once a day)
Many applications can share one circuit
Tor routers don't need root privileges (encourages people to set up their own routers -> more anonymity)
Directory servers:
-Maintain list of active onion routers, locations, public keys etc.
-Control how new routers join the network |
|
Sybil attack (onion Routing / Tor)
|
Attacker creates a large number of routers to increase the chance that connected routers on a circuit are compromised
|
|
Hidden Services
|
Server on the internet that anyone can connect to without knowing where it is or who runs it
Accessible from anywhere
Resistant to censorship
Survives flooding attacks
Resistant to physical attack
Idea:
-Introduction points (information is provided via a directory service)
-Client selects a rendezvous point and tells the server to meet him there
-Connections use Tor circuits |
|
Attacker Types
|
Global vs. local attackers (based on control of all or just some communication)
Active vs. passive (inserting, manipulating vs. monitoring, extracting)
Internal vs. external (part / not part of the anonymous network) |
|
Attacks against TOR
|
Routing attack:
-Exploits preferential routing (better routers have higher probabilities of being chosen)
-Attacker can set up a "preferred router" to increase the probability of being chosen
-Information about the resources of a Tor node is only provided by the node itself -> low-resource nodes can also be used
-Path information of compromised routers can be linked -> link sender and receiver
Cell counter attack:
-Entry and exit nodes can delay cells sent out
-If attacker has compromised entry and exit node he can link sender and receiver by delaying the sending of cells
-(see Chapter 14, pp. 57-58)
Website fingerprinting:
-Idea: attacker can learn identity (e.g. URL of website) by comparing observed traffic to a library of previously recorded fingerprints for this website
-Fingerprint is constructed by exploiting the distinct structure and size of HTML pages and included scripts, styles, ...
-Has been successful over OpenSSH and SSL tunnels; also works on Tor |
|
Crowds System
|
Routers form a random path when establishing a connection
Connections between routers are symmetrically encrypted
After receiving a message, a router flips a biased coin to decide whether to randomly select the next router or send directly to the recipient
User is represented by a process called jondo
Jondo contacts a server called blender which returns the current membership of the crowd and the contact information
Jondo picks another jondo from the crowd and uses it as the first crowd router |
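A minimal sketch of the Crowds forwarding rule; the forwarding probability and crowd size are assumed values:

```python
import random

P_FORWARD = 0.75                          # probability of forwarding (assumed parameter)

def route_request(crowd):
    path = [random.choice(crowd)]         # initiator picks a random first jondo
    while random.random() < P_FORWARD:
        path.append(random.choice(crowd)) # biased coin says: forward to another random jondo
    path.append("recipient")              # otherwise deliver directly to the recipient
    return path

crowd = [f"jondo{i}" for i in range(10)]
print(route_request(crowd))
```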
|
Predecessor Attack
|
Against Crowds
A number of attackers may join the crowd and wait for paths to be re-formed
Each attacker can log its predecessor
Initiator of a path is far more likely than any other node to appear on the path, so attackers will log the initiator more often than other nodes
After many path re-formations the identity of the initiator will become clear |
|
Is Anonymity a Market
|
Mass market of end users has not been reached yet
Too many disadvantages:
-Advantages are unclear to the user
-Browsing is slowed down
-Still high complexity of systems
-Inability to observe or demonstrate that communication is anonymous
More Tor users since the NSA revelations |
|
Tor stinks
|
Name of a top secret presentation revealed by Snowden
States that the NSA has problems deanonymizing Tor users
States that they therefore try to infiltrate computers of Tor users with the help of malware |
|
Multi Party Computation
|
Classic setting: two parties that trust each other want to communicate over an insecure channel
Here: multiple users that do not trust each other want to compute something in a distributed fashion while keeping their inputs private
Examples:
-Elections without a trusted third party
-Auctions (each bidder makes an offer; who won? without showing each offer)
-Distributed data mining (companies want to compare data without revealing it)
-Private database access (evaluate query on database without revealing query to database owner)
Main idea: achieve computations without a trusted third party
-Correctness (honest parties should receive the correct result)
-Security (corrupt parties should learn no more from the protocol than in the ideal model)
Problems: some of the participants may be corrupt
-Semi-honest (party follows protocol but tries to learn more from received messages)
-Malicious (deviates from protocol in arbitrary ways, lies about inputs, may quit at any point, may refuse to participate) |
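A toy example of one simple multi-party building block, additive secret sharing, used here to compute a sum without revealing individual inputs; this is an illustration of the general idea, not one of the protocols above, and it only tolerates semi-honest parties:

```python
import secrets

Q = 2**61 - 1                                 # all arithmetic modulo a public prime

def share(secret: int, n: int):
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % Q)  # shares sum to the secret mod Q
    return shares

def secure_sum(inputs):
    n = len(inputs)
    # every party splits its input and sends one share to each party
    all_shares = [share(x, n) for x in inputs]
    # party j only ever sees column j; individually the shares reveal nothing
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % Q for j in range(n)]
    return sum(partial_sums) % Q               # published partial sums reveal only the total

print(secure_sum([3, 14, 15]))                 # 32, without any party seeing the others' inputs
```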
|
Millionaire Problem
|
Two millionaires wish to know which one of them is richer but do not want to reveal any more information to the other millionaire about their wealth
|
|
Real world secure multi party vs. ideal-world
|
Corrupted participants' view of the computation should be equal to their view in the ideal world.
|
|
RFID
|
Radio-Frequency Identification Tag
Tags were originally meant to store only the ID of a tag
Today RFID tags often store more than just an ID
RFID systems typically consist of:
-RFID tags
-RFID readers that read identifiers from tags and are connected to backend systems
-A backend system that stores, collects, processes information corresponding to tags
Power sources:
-Passive (tags are inactive until a reader's interrogation signal "wakes" them up -> cheap but short range only)
-Active (on-board battery, can initiate communication; used e.g. in transportation, attached to containers)
Capabilities:
-Little memory (static 64- or 128-bit identifier)
-Little computational power (a few thousand gates at most, static keys for read/write access control)
-Not enough resources to support all public- or symmetric-key cryptography mechanisms
-Today AES on RFID considered feasible
-ECDSA signature successfully implemented on fairly low-cost chips
-Random number generator feasible |
|
RFID vs .Barcode
|
RFID no line of sight required
RFID: item orientation towards the reader doesn't matter
RFID enables automated scanning without unpacking
RFID enables scanning hundreds of items per second
RFID with Electronic Product Code increases payload size -> unique serial number instead of only item brand/type
-> Tracking of every single item possible |
|
RFID Examples
|
Payment in Cafeterias
Public transport passes (MIFARE) and road tolls
Water park entry passes
Animal implants (cat door that opens up only for your cat)
Human implants (e.g. VIP clubs)
Car keys
Keeping inventory (more cost efficient than barcodes, faster scanning, less personnel)
Resistance to forgery (e.g. electronic passports, FIFA world cup tickets)
Theft prevention
Baggage systems at airports
Controlling production lines (which pieces are where, which belong together)
Future: RFID-aware household devices |
|
RFID Cost
|
Application dependent
Low cost tags: 5 ct
Reader: 200€
Example deployment in a library: 100,000 books, 50,000€, 0.36 ct per book, 12,500 borrowing station, 10,000 detection portal |
|
Privacy Problems (RFID)
|
No certainty for the consumer about…
-Which items contain tags
-What is stored on a tag and by whom
-Who can read tags
-Which readers are networked / interact with the same backend
-If tags are unique
-If tags are/can be deactivated/reactivated
Tag duplication
Unauthorized tracking of objects
Relay attacks:
-Attacker impersonates tag in front of the door
-Impersonates reader at the possibly far away location of the victim tag
-Relays traffic between his tag and reader
Low cost tags can typically be destroyed by anyone
Radio channel can be blocked by anyone |
|
Privacy Solutions (RFID)
|
Physical shielding (e.g. Wrapping in tin foil)
Rename tags at store checkout (ID still unique)
Kill tags at store checkout (denies benefits: return, repair, ...)
Use cryptography to protect ID from "unauthorized" readers
-> more computation on tag -> costly
-> requires key management / key infrastructure -> really costly
Range for rogue readers: 50-100 cm
Approaches:
-"kill" command
-"sleep" command
-renaming
-blocking
-Crypto-enabled tags: hash locks, Hopper-Blum, public-key approaches |
|
Blocking (RFID)
|
Binary tree walking is used to determine which tags are present
IDs are leaves of a binary tree
Depth-first search in the tree
Blocking idea: use a blocking tag that emits both 0 and 1 whenever a reader queries for the next bit
-> Reader will think all tags are present
Problems:
-Authorized reads are blocked as well
-Blocker devices of one user will block tags of other users as well
-> Privacy zone:
-Upon purchase of a product its tag is transferred into the privacy zone by setting the leading bit
-Blocker tag (device) simulates a collision only if the reader's query starts with 1 (example case) |
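A toy simulation of tree walking with a (full) blocker tag; a 4-bit ID space is assumed purely for illustration. A selective blocker would only simulate collisions inside the privacy zone (queries starting with 1):

```python
ID_BITS = 4                                    # tiny ID space for illustration

def singulate(tags, blocker=False, prefix=""):
    """Return the set of IDs the reader believes to be present."""
    if len(prefix) == ID_BITS:
        return {prefix}
    found = set()
    for bit in "01":
        candidate = prefix + bit
        tag_answers = any(t.startswith(candidate) for t in tags)
        if tag_answers or blocker:             # blocker simulates a collision on every bit
            found |= singulate(tags, blocker, candidate)
    return found

real_tags = {"0110", "1010"}
print(len(singulate(real_tags)))               # 2: only the real tags are found
print(len(singulate(real_tags, blocker=True))) # 16: reader believes the whole tree is occupied
```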
|
Lightweight Crypto For RFID
|
RSA/AES/DES/ECDSA typically assumed to be too costly for low-cost tags
However, ECDSA and one-side-authenticated EC-DH have successfully been implemented on quite low-cost tags (~1 Euro)
True random number generators on tags are possible via the analog-digital conversion done in the RF interface
Hash functions are barely possible on small tags
Nevertheless many cryptographic protocols developed for RFID tags are based on hash functions |
|
Hash Locks
|
Reader-to-tag authentication
Meant to prevent reading by unauthorized readers
Cheap to implement: tag has to store only the metaID and implement a hash function
Security based on weak collision resistance of the hash function; metaID looks random
Problems:
-Tag always responds with the same metaID
-Real ID not protected against eavesdropping
-Authorized reader always uses the same "key" to indicate its authorization
Improvement: Randomized Hash Locks
-Needs pseudo-random number generator
-Tag responds differently every time
-Reader must perform brute-force ID search |
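A sketch of the basic hash lock and the randomized variant; SHA-256, the key and the tag IDs are made-up illustration choices:

```python
import hashlib, os

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

class Tag:
    def __init__(self, key, real_id):
        self.meta_id = h(key)              # only value the locked tag reveals
        self.real_id = real_id

    def query(self):
        return self.meta_id                # basic hash lock: always the same answer

    def unlock(self, key):
        # authorized reader looked up `key` for this metaID in the backend
        return self.real_id if h(key) == self.meta_id else None

    def query_randomized(self):
        r = os.urandom(8)                  # randomized hash lock: fresh answer every time
        return r, h(self.real_id, r)

def reader_identify(response, known_ids):
    r, digest = response
    for tag_id in known_ids:               # reader brute-forces over all IDs it knows
        if h(tag_id, r) == digest:
            return tag_id

key, tag_id = b"reader-key", b"TAG-0001"   # hypothetical values
tag = Tag(key, tag_id)
assert tag.unlock(key) == tag_id
assert reader_identify(tag.query_randomized(), [b"TAG-0000", b"TAG-0001"]) == tag_id
```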
|
Hopper and Blum Protocol
|
Based on the Learning Parity in the Presence of Noise (LPN) problem
Reader stores a list of tag IDs and secrets x and runs the HB protocol without knowing the ID of the tag
Reader exhaustively searches through the list of tags for a tag for which the number of incorrect responses in the r rounds of the HB protocol is below the acceptance threshold
Active attacks cannot be excluded -> HB+ protocol |
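A toy run of the HB idea: the tag answers noisy inner products a·x over GF(2), and the reader accepts if the number of wrong answers stays below a noise-dependent threshold. Key length, round count, noise rate and threshold are assumed values:

```python
import secrets, random

K, ROUNDS, ETA = 32, 100, 0.125                 # key length, rounds, noise rate (assumptions)

def dot(a, x):
    return bin(a & x).count("1") % 2            # inner product over GF(2)

def hb_run(x_tag, x_reader):
    wrong = 0
    for _ in range(ROUNDS):
        a = secrets.randbits(K)                 # reader's random challenge
        noise = 1 if random.random() < ETA else 0
        z = dot(a, x_tag) ^ noise               # tag's noisy response
        if z != dot(a, x_reader):
            wrong += 1
    return wrong <= int(2 * ETA * ROUNDS)       # accept below a noise-dependent threshold

secret = secrets.randbits(K)
print(hb_run(secret, secret))                   # legitimate tag: accepted (with high probability)
print(hb_run(secrets.randbits(K), secret))      # wrong key: ~50% wrong answers -> rejected
```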
|
Active Attack Against Hopper and Blum
|
|
|
HB+ Protocol
|
Still attackable if attacker in the middle can observe authentication decision made by reader
Requires attacker to be able to place himself in the middle between reader and tag |