As a reaction to this complexity, we designed a new
abstraction that allows us to express the simple computa-
tions we were trying to perform but hides the messy de-
tails of parallelization, fault-tolerance, data distribution
and load balancing in a library. Our abstraction is in-
spired by the map and reduce primitives present in Lisp
and many other functional languages. We realized that
most of our computations involved applying a map op-
eration to each logical “record” in our input in order to
compute a set of intermediate key/value pairs, and then
applying a reduce operation to all the values that shared
the same key, in order to combine the...
8/30/2018 2:40:28 AM +00:00
Programs written in this functional style are automati-
cally parallelized and executed on a large cluster of com-
modity machines. The run-time system takes care of the
details of partitioning the input data, scheduling the pro-
gram’s execution across a set of machines, handling ma-
chine failures, and managing the required inter-machine
communication. This allows programmers without any
experience with parallel and distributed systems to eas-
ily utilize the resources of a large distributed system.
Our implementation of MapReduce runs on a large
cluster of commodity machines and is highly scalable:
a typical MapReduce computation processes many ter-
abytes of data on thousands of machines. Programmers
find the system easy to use: hundreds...
The computation takes a set of input key/value pairs, and
produces a set of output key/value pairs. The user of
the MapReduce library expresses the computation as two
functions: Map and Reduce.
Map, written by the user, takes an input pair and pro-
duces a set of intermediate key/value pairs. The MapRe-
duce library groups together all intermediate values asso-
ciated with the same intermediate key I and passes them
to the Reduce function.
The Reduce function, also written by the user, accepts
an intermediate key I and a set of values for that key. It
merges together these values to form a possibly smaller
set of values. Typically just zero or one...
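The Map/Reduce contract described above can be illustrated with the classic word-count computation. The sketch below is a minimal in-memory rendering of the model, not the library's actual API; the names `map_fn`, `reduce_fn`, and `run` are illustrative:

```python
from collections import defaultdict

def map_fn(document: str):
    # Map: emit an intermediate (word, 1) pair for each word in the record.
    for word in document.split():
        yield word, 1

def reduce_fn(word: str, counts):
    # Reduce: merge all values that share the same intermediate key.
    return word, sum(counts)

def run(documents):
    # Shuffle phase: group intermediate values by their key,
    # then apply the reduce function to each group.
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

print(run(["the quick fox", "the lazy dog"]))
# {'the': 2, 'quick': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

In the real system the grouping step is distributed across machines; the point here is only the shape of the two user-supplied functions.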
A clickbot is a software robot that clicks on ads (issues HTTP
requests for advertiser web pages) to help an attacker conduct
click fraud. Some clickbots can be purchased, while others
are malware that spreads on its own as part of larger botnets.
Malware-type clickbots can receive instructions from a botmas-
ter server as to what ads to click, and how often and when to
click them.
There are many types of clickbots used on the Internet. Some
are “for-sale” clickbots, while others are malware. For-sale
clickbots such as the Lote Clicking Agent, I-Faker, FakeZilla,
and Clickmaster can be purchased online. They typically use
anonymous proxies to generate traffic with IP...
Malware-type clickbots infect machines in order to achieve
IP diversity, and their traffic may or may not be as easily iden-
tifiable as that generated by for-sale clickbots. Clickbot.A is
a malware-type clickbot, and is identified as a trojan by some
anti-virus packages. A scan of the Clickbot.A binary with
VirusTotal, which runs various anti-virus scanners, pro-
duced the results shown in Table 1 in the Appendix, as noted
by SANS handler Swa Frantzen [2].
Many of the popular virus scanners, including McAfee, Sophos,
and Symantec, did not detect that Clickbot.A was malicious,
and some of those that did detect it only did so because it used
a common Trojan...
This paper presents a detailed case study of the Clickbot.A
botnet. The botnet consisted of over 100,000 machines and ex-
hibited some novel characteristics while also taking advantage
of some characteristics of existing, well-known botnets. One of
the most novel characteristics of the clickbot is that it was built
to conduct a low-noise click fraud attack against syndicated
search engines.
This paper focuses on describing the novel aspects of the
Clickbot.A botnet, and describes parts of our experience in in-
vestigating it. For instance, we describe how syndicated search
engines work, and how Clickbot.A attacked such search en-
gines.
We believe that it is important to disclose the details of how
such botnets...
This paper develops a minimally supervised
approach, based on focused distributional sim-
ilarity methods and discourse connectives,
for identifying causality relations between
events in context. While it has been shown
that distributional similarity can help identify
causality, we observe that discourse con-
nectives and the particular discourse relation
they evoke in context provide additional in-
formation towards determining causality be-
tween events. We show that combining dis-
course relation predictions and distributional
similarity methods in a global inference pro-
cedure provides additional improvements to-
wards determining event causality....
Consider typical WSN applications involving the reliable de-
tection and/or estimation of event features based on the collec-
tive reports of several sensor nodes observing the event. Let us
assume that for reliable temporal tracking, the sink must de-
cide on the event features every τ time units. Here, τ represents
the duration of a decision interval and is fixed by the applica-
tion. At the end of each decision interval, the sink decides based
on reports received from sensor nodes during that interval. The
specifics of such a decision making process are application de-
pendent and beyond the scope of our paper.
The least we can assume is that the sink derives...
We measure the reliable transport of event features from
source nodes to the sink in terms of the number of received
data packets. Regardless of any application-specific metric that
may actually be used, the number of received data packets is
closely related to the amount of information acquired by the
sink for the detection and extraction of event features. Hence,
this serves as a simple but adequate event reliability measure at
the transport level. The observed and desired event reliabilities
are now defined as follows:
Definition 1: The observed event reliability, r_i, is the number
of data packets received at the sink in decision interval i.
Definition 2: The desired event reliability,...
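Definition 1 amounts to counting packets per decision interval. A small Python sketch, assuming packets are identified by their arrival timestamps and intervals are indexed from zero (the function name and timestamps are illustrative):

```python
def observed_reliability(packet_timestamps, tau):
    # Bucket each received packet into its decision interval of length tau,
    # returning a mapping: interval index i -> observed reliability r_i.
    counts = {}
    for t in packet_timestamps:
        i = int(t // tau)
        counts[i] = counts.get(i, 0) + 1
    return counts

# Three packets arrive in interval 0 ([0, 10)) and one in interval 1.
r = observed_reliability([1.0, 2.5, 9.9, 12.0], tau=10.0)
print(r)  # {0: 3, 1: 1}
```

The sink would then compare each r_i against the desired reliability to decide whether enough information was delivered in that interval.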
Officials’ blocks and paddles shall be utilized on each mat. An
announcer and assistant announcer shall be utilized. Contestants shall be
directed by the announcer to report directly to a specific mat. The announcer
should read the age division and weight class first, next the names of the athletes
and their state/club, then the mat number, and finally the names and mat number
should be repeated. It is suggested that if semi-finals are conducted, they shall
be run in flights by weight class following the conclusion of all pool finals. Medal
matches shall be run in...
To ensure security and successful ticket sales, the Event
Director and Event Coordinator shall work cooperatively in deciding a
pass/credential list for the event. Credentialed personnel should include but not
be limited to security, VIPs and sponsors, media, volunteers, officials, medical,
photographers, and USAW staff. Please be aware that coaches who hold a current
USA Wrestling Coaches Card and are NCEP Certified are entitled to free
admission to USAW events and should be credentialed accordingly. ...
Critical business systems and their associated technologies are typically held to performance
benchmarks. In the security space, benchmarks of speed, capacity and accuracy are common
for encryption, packet inspection, assessment, alerting and other critical protection technolo-
gies. But how do you set benchmarks for a tool based on collection, normalization and corre-
lation of security events from multiple logging devices? And how do you apply these bench-
marks to today’s diverse network environments? ...
This is the problem with benchmarking Security Information and Event Management (SIEM) sys-
tems, which collect security events from one to thousands of devices, each with its own differ-
ent log data format. If we take every conceivable environment into consideration, it is impossi-
ble to benchmark SIEM systems. We can, however, set one baseline environment against which
to benchmark and then include equations so that organizations can extrapolate their own
benchmark requirements. That is the approach of this paper....
Consider that network and application firewalls, network and host Intrusion Detection/Preven-
tion (IDS/IPS), access controls, sniffers, and Unified Threat Management systems (UTM)—all log
security events that must be monitored. Every switch, router, load balancer, operating system,
server, badge reader, and custom or legacy application across the enterprise produces logs of
security events, as do many other IT systems and every new system to follow (such as virtual-
ization). Most have their own log expression formats. Some systems, like legacy applications,
don’t produce logs at all. ...
First we must determine what is important. Do we need all log data from every critical system
in order to perform security, response, and audit? Will we need all that data at lightning speed?
(Most likely, we will not.) How much data can the network and collection tool actually handle
under load? What is the threshold before networks bottleneck and/or the SIEM is rendered
unusable, not unlike a denial of service (DoS)? These are variables that every organization
must consider as they hold SIEM to standards that best suit their operational goals....
Why is benchmarking SIEM important? According to the National Institute of Standards and Technology (NIST),
SIEM software is a relatively new type of centralized logging software compared to syslog. Our
SANS Log Management Survey¹ shows 51 percent of respondents ranked collecting logs as their
most critical challenge – and collecting logs is a basic feature a SIEM system can provide. Further,
a recent NetworkWorld article² explains how different SIEM products typically integrate well with
selected logging tools, but not with all tools. This is due to the disparity between logging and
reporting formats from different systems. There...
Event performance characteristics provide a metric against which most enterprises can judge
a SIEM system. The true value of a SIEM platform, however, will be in terms of Mean Time To
Remediate (MTTR) or other metrics that can show the ability of rapid incident response to miti-
gate risk and minimize operational and financial impact. In our second set of benchmarks for
storage and analysis, we have addressed the ability of SIEM to react within a reasonable MTTR
rate to incidents that require automatic or manual intervention....
Because this document is a benchmark, it does not cover the important requirements that
cannot be benchmarked, such as requirements for integration with existing systems (agent vs.
agent-less, transport mechanism, ports and protocols, interface with change control, usability
of user interface, storage type, integration with physical security systems, etc.). Other require-
ments that organizations should consider but aren’t benchmarked include the ability to process
connection-specific flow data from network elements, which can be used to further enhance
forensic and root-cause analysis. ...
The matrices that follow are designed as guidelines to assist readers in setting their own bench-
mark requirements for SIEM system testing. While this is a benchmark checklist, readers must
remember that benchmarking, itself, is governed by variables specific to each organization. For
a real-life example, consider an article in eSecurity Planet, in which Aurora Health in Michigan
estimated that they produced 5,000–10,000 EPS, depending upon the time of day.⁴ We assume
that means during the normal ebb and flow of network traffic. What would that load look like
if it were under attack? How many security...
Speed of hardware, NICs (network interface cards), operating systems, logging configurations,
network bandwidth, load balancing and many other factors must also go into benchmark
requirements. One may have two identical server environments with two very different EPS
requirements due to any or all of these and other variables. With consideration of these vari-
ables, EPS can be established for normal and peak usage times. We developed the equations
included here, therefore, to determine Peak Events (PE) per second and to establish normal
usage by exchanging PE_x for NE_x (Normal Events per second). ...
Once these computations are complete, the resulting Peak EPS set of numbers will reflect that
grand, but impractical, peak total mentioned above. Again, it is unlikely that all devices will ever
simultaneously produce log events at maximum rate. Seek consultation from SMEs and the
system engineers provided by the vendor in order to establish a realistic Peak EPS that the SIEM
system must be able to handle, then set filters for getting required event information through
to SIEM analysis, should an overload occur. ...
Once the topography is defined, the next stage is to average EPS collected from these devices
during normal and peak periods. Remember that demanding all log data at the highest speed
24x7 could, in itself, become problematic, causing a potential DoS situation with network or
SIEM system overload. So realistic speeds based on networking and SIEM product restrictions
must also be considered in the baseline.
Protocols and data sources present other variables to consider when determining average and peak
load requirements. In terms of effect on EPS rates, our experience is that systems using UDP can
generate more events more quickly, but this creates a higher load for the management tool,
which actually slows collection and correlation when compared to TCP. One of our reviewing
analysts has seen UDP packets dropped at 3,000 EPS, while TCP could maintain a 100,000 EPS
load. It’s also been our experience that use of both protocols in a single environment....
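The asymmetry between the two protocols is visible in their transport semantics. A minimal Python sketch of UDP's fire-and-forget behavior (the port number is illustrative, not the standard syslog port):

```python
import socket

# UDP log delivery is fire-and-forget: sendto() returns as soon as the
# datagram is handed to the network stack, with no delivery guarantee,
# so a saturated collector silently drops events.
msg = b"<134>demo: sample security event"
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    sent = s.sendto(msg, ("127.0.0.1", 5514))  # succeeds even with no listener

# TCP, by contrast, is connection-oriented: connect() fails fast if the
# collector is down, and send() exerts back-pressure when it is saturated,
# slowing the sender instead of losing events.
```

This is consistent with the observation above: UDP senders can emit faster, but the collector bears the loss when it cannot keep up.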
Now that we have said so much about EPS, it is important to note that no one
ever analyzes a single second’s worth of data. An EPS rating is simply designed as
a guideline to be used for evaluation, planning and comparison. When design-
ing a SIEM system, one must also consider the volume of data that may be ana-
lyzed for a single incident. If an organization collects an average of 20,000 EPS
over eight hours of an ongoing incident, that will require sorting and analysis of
576,000,000 data records. Using a 300 byte average size, that...
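The arithmetic behind that figure is worth making explicit; a quick check using the numbers from the text:

```python
# Volume implied by a sustained incident: average EPS x duration x record size.
eps = 20_000
seconds = 8 * 3600           # eight hours of ongoing incident
records = eps * seconds
print(records)               # 576000000, matching the figure in the text

avg_record_bytes = 300       # the average record size assumed in the text
total_bytes = records * avg_record_bytes
print(total_bytes / 2**30)   # ~160.9 GiB of data to sort and analyze
```

So an EPS rating alone understates the problem: the analysis back-end must also sort and query hundreds of gigabytes per incident.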
To accomplish this, a reliable transport mechanism is required
in addition to robust modulation and media access, link error
control and fault tolerant routing. The functionalities and design
of a suitable transport solution for WSN are the main issues
addressed in this paper.
The need for a transport layer for data delivery in WSN was
questioned in a recent work [12] under the premise that data
flows from source to sink are generally loss tolerant. While the
need for end-to-end reliability may not exist due to the sheer
amount of correlated data flows, an event in the sensor field
needs to be tracked with a certain accuracy at the sink. Hence,
unlike traditional...
Of the current attacks on Web applications, those based
on script injection are by far the most prominent. For ex-
ample, script injection is used in cross-site scripting [1]
and Web application worms [2, 24].
A script injection vulnerability may be present when-
ever a Web application includes data of uncertain origin
in its Web pages; a third-party comment on a blog page
is an example of such untrusted data. In a typical attack,
malicious data with surreptitiously embedded scripts is
included in requests to a benign Web application server;
later, the server may include that data, and those scripts,
in Web pages it returns to unsuspecting users. Since Web
browsers execute scripts on...
Script injection attacks typically affect non-malicious
users and succeed without compromising Web applica-
tion servers or networks. For example, in 2005, the self-
propagating Samy worm on MySpace used script injec-
tion to infect over a million users [24]. As a MySpace
user viewed the MySpace page of another, infected user,
the worm script would execute and send a page update
request to the server, causing the worm script to be in-
cluded also on the viewing user’s page.
In an attempt to prevent script injection, most Web ap-
plication servers try to carefully filter out scripts from
untrusted data. Unfortunately, such data sanitization is
highly error prone (see Section 2.1). For example,...
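The fragility of blacklist-style sanitization can be shown with a deliberately naive filter; the regex below is a made-up example for illustration, not any real server's code:

```python
import re

def naive_sanitize(html: str) -> str:
    # Blacklist approach: strip <script> blocks and nothing else.
    return re.sub(r"<script.*?>.*?</script>", "", html, flags=re.I | re.S)

# The obvious injection is caught:
print(naive_sanitize('<script>steal()</script>hello'))  # 'hello'

# But script can execute without a <script> tag at all, e.g. via an
# event-handler attribute, so this payload passes through untouched:
attack = '<img src=x onerror="steal()">'
print(naive_sanitize(attack))  # unchanged: the attack survives the filter
```

Each such bypass requires another ad hoc rule, which is why filter-based sanitization tends to lag behind attack techniques.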
Web applications provide end users with client access to
server functionality through a set of Web pages. These
pages often contain script code to be executed dynami-
cally within the client Web browser.
Most Web applications aim to enforce simple, intu-
itive security policies, such as, for Web-based email, dis-
allowing any scripts in untrusted email messages. Even
so, Web applications are currently subject to a plethora
of successful attacks, such as cross-site scripting, cookie
theft, session riding, browser hijacking, and the recent
self-propagating worms in Web-based email and social
networking sites [2, 17, 24]. Indeed, according to sur-
veys, security issues in Web applications are the most
commonly reported vulnerabilities on the Internet...
Web applications must consider the possibility of mali-
cious attackers that craft arbitrary messages, and counter
this threat through server-side mechanisms.
However, to date, Web application development has
focused only on methodologies and tools for server-side
security enforcement (for instance, see [11, 13]). At
most, non-malicious Web clients have been assumed to
enforce a rudimentary “same origin” security policy [22].
Web clients are not even informed of simple Web appli-
cation invariants, such as “no scripts in the email mes-
sage portion of a page”, since clients are not trusted to
enforce security policies.
This focus on centralized server-side security mecha-
nisms is shortsighted: server-side enforcement has diffi-
culties constraining even simple client behavior....
At the same time, the majority of users are not ma-
licious, and would enable client-side enforcement to
avoid exploits such as cross-site scripting and Web-based
worms. Even if only benign users with enhanced clients
might perform security enforcement, those users would
be protected, and all users would benefit from fewer at-
tacks on the Web application.
Unfortunately, there are many obstacles to the adop-
tion of new, enhanced security mechanisms in popular
Web browsers. Even when such enhancements are prac-
tical and easy to implement, they may not be deployed
widely. Therefore, to increase its chance of widespread
adoption, a Web client security mechanism should be
practical, simple, and flexible, and be able...