How has the Internet changed the way we communicate?

The Internet has increased options for lateral communication and reduced the need for mediators, creating new types of relationships and increasing the possibilities for major social change.

The Internet has grown so rapidly and become indispensable so quickly that it can be hard to remember that it is still a fairly new medium. Most Americans use the Internet daily for personal and, increasingly, for business purposes. Children are growing up with online activities as second nature. Even in rural and lower-income communities, opportunities for Internet usage are spreading.

According to an analysis by eMarketer, in 2010 the average US consumer spent 11 hours per day interacting with major media, and a ComScore survey found that in January 2012 the average US Internet user spent 36 hours online during the month. These figures reflect the growing importance of digital media in Americans’ daily lives. A Pew report sought to determine whether increasing Internet usage would make individuals more or less intelligent. About 42% of the respondents felt that individuals would be affected negatively, developing a “thirst for gratification and quick fixes”, while 55% felt that Internet users were “learning more and they are adept at finding answers to deep questions, in part because they can search effectively and access collective intelligence via the Internet.”

Social media is having a huge influence on the way people communicate today. The explosion of options includes blogs, in which anyone can write about any topic and collect followers who are interested in what they have to say, and microblogs like the immensely popular Twitter, in which users can share any thoughts they wish within a 140-character limit. Wikis such as Wikipedia are information sites in which the content is posted and updated by the users rather than by any single organization. Multimedia sites like YouTube are also hugely popular; anyone can post videos on just about any topic and share them with users around the world. Finally, social networks such as Facebook have provided completely new options for socializing, letting people create individual profiles that serve as a digital scrapbook, photo album, journal, and communications portal. Users can send each other public, private, or instant messages, and can even group their “friends” into different levels depending on how much information they want to share with each person. Companies, schools, charities, and other organizations have joined in by creating pages that users can “like” or “recommend”.

Another major change has been in the way news is communicated to users. The advent of 24-hour news programs, first on television and now on any number of websites, has changed in some ways the very content of the news itself. Because there may not be enough “hard” news of interest to fill all of these outlets around the clock, there is repetition, fluff, and, increasingly, commentary disguised as news. Some people feel that news outlets may conspire to highlight certain stories that get played over and over while giving little time to competing stories that certain corporations or political organizations want to bury, thereby swaying public opinion. Sometimes, in order to gain a full understanding of events, it is necessary to obtain news from a number of sources.

Perhaps one of the most important results of the communications options available via the Internet, and increasingly via Internet-capable mobile devices, is the growing use of lateral communication. Benjamin Barber discusses the effect new media is having on democracy, in that “the Net offers a useful alternative to elite mass communication in that it permits ordinary citizens to communicate directly around the world without the mediation of elites.” Barber states:

Integrated systems of computers and the world wide web are ‘point-to-point’ technologies that promise direct lateral communication among all participants and thus offer an unmediated horizontal access (“immediacy”), and entail the elimination of overseers and middlemen, of facilitators and editors, and of hierarchical, busy-body gatekeepers. The virtue of immediacy is that it facilitates equality and egalitarian forms of horizontal communication. Representative democracy favors vertical communication between “elites and masses,” but strong democracy (as I argued in my book of that name fifteen years ago) prefers lateral communication among citizens, who take precedence over leaders and representatives.

New media on the Internet have increasingly been used to bring groups of like-minded people together, sometimes resulting in major social changes. Members of Occupy Wall Street and similar groups have used email, instant messages, social media, and multimedia sites to communicate plans and to disseminate photographs and videos taken with their mobile devices to group members and the public, in some cases evading police in order to continue their actions, or getting evidence of their struggles out to the public even when traditional media were barred from the area.

In the past year, even more dramatic examples of the power of communicating via the Internet were seen in what has been called the “Arab Spring”: protests in countries such as Egypt and Libya in which citizens successfully demonstrated and engaged in civil resistance against oppressive regimes. In spite of governmental attempts to censor the Internet and keep organized media from filming the protests, social media was successfully used to organize protesters and get their stories out to the watching world.

Even fifteen years ago, few imagined the great changes that were ahead. Then, communicating with relatives overseas, or keeping in touch with people at home while traveling abroad, required expensive phone calls, often placed with difficulty from a “third world” country. Slowly, Internet cafes became available around the world, often even in small villages, though the service was frequently spotty. Now we can take our mobile telephones with Internet access and call home from camel-back in India.

There is much concern over proposed legislation such as SOPA (Stop Online Piracy Act) and PIPA (Protect IP), which were introduced as methods of reducing theft of intellectual property. It was widely feared, however, that the legislation would instead result in loss of free speech and innovation, and in censorship, as powerful organizations could arbitrarily have websites blocked with no recourse. In a powerful protest against this type of legislation, on January 18, 2012 many websites such as Wikipedia engaged in a service blackout, while other sites such as Google placed a banner inviting individuals to sign a petition against the legislation. Senator Ron Wyden wrote an open letter on the World Wide Web to “Innovators, Speakers, Thinkers, and Agents for Change” discussing his reservations about the legislation and applauding the action. Wyden stated,

The Internet has become an integral part of everyday life precisely because it has been an open-to-all land of opportunity where entrepreneurs, thinkers and innovators are free to try, fail, and then try again. The Internet has changed the way we communicate with each other, the way we learn about the world and the way we conduct business. It has done this by eliminating the tollgates, middlemen, and other barriers to entry…. It has created a world where ideas, products and creative expression have an opportunity regardless of who offers them or where they originate.

I think that Ron Wyden’s letter itself is a prime example of one of the new ways we are communicating using the Internet.

IPS (Intrusion Prevention Systems)

Intrusion Prevention Systems, or IPSs, are systems designed to detect unauthorized intrusions into a network and take action to stop them. There are two main types. The first is network-based IPS, in which a device called a sensor runs an operating system that monitors network packets on certain circuits and reports intrusions to an IPS management console. The second is host-based IPS, a software package installed on a host or server that monitors activity on that machine and likewise reports intrusions to the IPS management console.

There are two common techniques for detecting an intrusion in progress, and most IPSs use both to get the best coverage. One is misuse detection, which compares monitored activity with signatures of known attacks. If a known attack signature is recognized, the IPS issues an alert and discards the suspicious packet. Since new attacks are constantly being created, the database of attack signatures must be kept up to date. The other technique is anomaly detection, which compares monitored activity with “normal” activity for that network; if a major deviation is found, such as a large number of failed login attempts, the IPS issues an alert and discards the suspicious packet. Anomaly detection works best in a stable network, and its drawback is the potential for false alarms.
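To make the anomaly detection idea concrete, here is a rough Python sketch of the failed-login example; the baseline threshold and the event format are assumptions made up for illustration, not taken from any particular IPS product:

    from collections import Counter

    BASELINE_FAILED_LOGINS_PER_MINUTE = 5   # assumed "normal" ceiling for this network

    def detect_anomalies(failed_login_events):
        """failed_login_events: iterable of (minute, source_ip) tuples from a log."""
        counts = Counter(failed_login_events)
        alerts = []
        for (minute, source_ip), n in counts.items():
            if n > BASELINE_FAILED_LOGINS_PER_MINUTE:
                # deviation from the baseline: raise an alert so the suspicious
                # traffic can be dropped and an administrator notified
                alerts.append(f"minute {minute}: {n} failed logins from {source_ip}")
        return alerts

A misuse (signature) check works the other way around: instead of comparing against a learned baseline, it matches traffic against known-bad patterns, as sketched after the Snort discussion below.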

An IPS is typically used in conjunction with other security tools such as firewalls. Unfortunately, IPS sensors and management consoles are themselves frequent targets of attack, so the IPS must be kept very secure. Many organizations use IPSs from different vendors to overlap or increase coverage.

As well as having an IPS to detect intrusions, an organization must have a plan for responding to intrusions immediately. If an organization needs help assembling an emergency team for this purpose, it can contact CERT, the Internet’s emergency response team, for assistance. Responding to intrusions is not always straightforward; a DoS attack, for example, may come from the IP address of a legitimate client, so simply discarding all messages from that address would cause the company to miss important traffic.

Types of security controls include user training, antivirus software, firewalls, and encryption both in transit and on servers, as well as IPS. One widely used tool is Snort, an open source network intrusion prevention and detection system (IDS/IPS) developed by Sourcefire. Snort is widely deployed and uses signature, protocol, and anomaly based inspection. It performs real-time traffic analysis and packet logging on IP networks. The company states that it can detect a variety of attacks and probes such as buffer overflows, stealth port scans, CGI attacks, and SMB probes. It uses a flexible rules language to describe traffic that should be stopped or passed, and its detection engine uses a modular plug-in architecture. Snort has three primary uses: as a straight packet sniffer, a packet logger, or a full-blown network intrusion prevention system, and it has real-time alerting capabilities with several options for alert formats. The company feels that open source can be leveraged to create superior software because thousands of programmers review and test the functionality of the Snort engine and rule sets, and can therefore detect and respond to new attacks faster than a closed environment could. Snort is free; it can be downloaded from www.snort.org, though it requires some additional software in order to run.
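As a very loose illustration of what signature-based inspection involves (this is not Snort’s engine or rules language, just a toy sketch using the third-party scapy package; the byte patterns are assumptions chosen for the example):

    from scapy.all import sniff, IP, TCP, Raw

    # toy "attack signatures": a classic CGI probe string and a run of NOP bytes
    SIGNATURES = [b"/cgi-bin/phf", b"\x90" * 30]

    def inspect(pkt):
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
            payload = bytes(pkt[Raw].load)
            for sig in SIGNATURES:
                if sig in payload:
                    print(f"ALERT: matched {sig!r} in packet from {pkt[IP].src}")

    # watch web traffic and run every packet through the signature check
    # (packet capture normally requires root/administrator privileges)
    sniff(filter="tcp port 80", prn=inspect, store=False)

A real Snort deployment does the equivalent job with compiled rule sets and a far more efficient matching engine, which is why keeping the rule database current matters so much.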

Snort’s parent company, Sourcefire, is an open source security company that offers several levels of IPS options for purchase. IPSs were invented to serve the needs of large organizations that wanted to detect attacks from within their networks, a job firewalls cannot do because they are external-facing. However, smaller organizations often lack the security staff to configure an IPS, or the budget, as these systems are expensive. Sourcefire has introduced an entry-level system, IPSx, which strips out features designed for larger networks such as advanced policy management and custom workflow. The entry-level system keeps the core reporting and alerts, pre-defined policies, and a simple interface, but it still runs from $18,000 to $35,000. Sourcefire’s full IPS offerings are even more expensive.

Having an IPS requires an investment of time by a company’s IT professionals to configure and update the system, and because threats change constantly, that work never ends. A large company can purchase an IPS, but it is expensive and will still need constant monitoring. Nevertheless, this is an investment of time and/or money that a company of any size must consider: a network open to intrusion can suffer huge amounts of damage to its systems, and to the company and its clients in the form of stolen data.

References

Dunn, John. “Sourcefire Takes Intrusion Prevention to Masses.” 18 April 2011. PC World. 12 August 2011 <http://www.pcworld.com/article/225443/sourcefire_takes_intrusion_prevention_to_masses_with_ipsx.html>.

Fitzgerald, Jerry. Business Data Communications and Networking. 10th ed. Hoboken: John Wiley & Sons, 2009.

Snort. 2010. 12 August 2011 <http://www.snort.org>.

Sourcefire. 2011. 12 August 2011 <http://www.sourcefire.com/>.

VLAN

A virtual LAN, or VLAN, is a newer type of LAN-BN (backbone network) architecture made possible by intelligent, high-speed switches. A VLAN is a network in which software, rather than hardware, is used to assign computers to LAN segments. Because the assignment is done in software, computers can be moved from one segment to another without touching physical cables.

A single-switch VLAN operates inside one switch: computers are connected to that switch and assigned by software to different VLAN segments. Computers in the same VLAN act as though they are connected to the same physical switch or hub in a subnet. VLAN switches can also create multiple subnets and act as layer-3 switches, or routers, except that the subnets are inside the switch rather than between switches. A broadcast message sent by a computer in one VLAN segment is sent only to computers on the same VLAN. The VLAN can be designed to act as though the computers are connected by a hub or by switches. A switched-circuit setup is preferable to the shared circuits of hubs, but VLAN switches with the capacity to provide switched circuits for hundreds of computers are more expensive.

The pros of VLANs are that they are faster than traditional LAN-BN routed architectures and offer better opportunities to manage the flow of traffic. A big benefit is that computers do not have to be assigned to subnets based on geographic proximity: a multiswitch VLAN uses several switches to build VLANs, and subnets can be created that contain computers in different buildings, so subnets can be based on who you are rather than where you are.

Another benefit of VLANs is that traffic on the LAN and BN can be managed very precisely, so faster performance can be obtained by allocating resources to manage broadcast traffic. The ability to prioritize traffic is another benefit. The VLAN tag information included in the Ethernet frame defines which VLAN the frame belongs to and specifies a priority code based on the IEEE 802.1Q standard. This makes it possible to use QoS capabilities at the data link layer: VoIP telephones, for example, can be connected directly to the VLAN switch, and the switch can be configured to reserve sufficient network capacity so that the phones can always make and receive calls.
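To show where that tag information actually lives, here is an illustrative Python snippet that builds an 802.1Q-tagged frame with the third-party scapy package; the VLAN number, priority value, addresses, and the choice of a VoIP signalling port are all assumptions made for the sketch:

    from scapy.all import Ether, Dot1Q, IP, UDP

    frame = (Ether(dst="00:11:22:33:44:55") /
             Dot1Q(vlan=10, prio=5) /      # 12-bit VLAN id and 3-bit 802.1Q priority code
             IP(dst="10.0.10.20") /
             UDP(dport=5060))              # e.g. VoIP signalling traffic to be prioritized
    frame.show()                           # prints the layered frame, including the VLAN tag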

Drawbacks of VLANs include greater cost, greater complexity, and the risk of using newer technologies. Therefore VLANs are typically only used for larger networks.

VLANs work by assigning each computer to a VLAN with a VLAN ID number, which is matched to a traditional IP subnet, so each computer also receives a traditional IP address assigned by the VLAN switch acting as a DHCP server. Most VLAN switches can support 255 VLANs simultaneously, so each switch can support up to 255 separate IP subnets.

Computers are assigned to a VLAN and IP subnet based on the physical port (the jack the cable plugs into) on the switch they are connected to. The network manager uses software to assign computers to specific VLANs by their physical port numbers, so it is easy to move a computer from one VLAN to another. If a VLAN switch receives a frame destined for another computer on the same subnet and on the same VLAN switch, it acts as a traditional layer-2 switch and forwards the frame unchanged to the correct computer. If the computer sends a message to a computer in the same subnet but attached to a different VLAN switch, the first switch modifies the Ethernet frame by inserting the VLAN ID number and priority code into the VLAN tag field and transmits the frame over the trunk to the other switch, which removes the VLAN tag information and delivers the frame to the destination computer.
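The forwarding logic just described can be summarized in a short Python sketch; the port numbers, MAC addresses, and VLAN ids here are invented purely for illustration:

    PORT_TO_VLAN = {1: 10, 2: 10, 3: 20}              # access port -> VLAN id
    LOCAL_MACS = {"aa:aa:aa:aa:aa:02": 2,              # destinations on this switch
                  "aa:aa:aa:aa:aa:03": 3}
    TRUNK_PORT = 24                                    # link to the other VLAN switch

    def forward(dst_mac, ingress_port):
        vlan = PORT_TO_VLAN[ingress_port]
        if dst_mac in LOCAL_MACS:
            # same switch: act as a traditional layer-2 switch, frame unchanged
            return ("forward untagged", LOCAL_MACS[dst_mac])
        # different switch: insert the VLAN id and priority into the 802.1Q tag
        # and send over the trunk; the far switch strips the tag before delivery
        return (f"insert VLAN {vlan} tag, send over trunk", TRUNK_PORT)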

Transitioning from IPv4 to IPv6

IPv4 stands for Internet Protocol version 4: a set of technical rules that define how computers communicate over the internet. It is the underlying technology that allows us to connect devices to the web. Whenever a device (PC, Mac, smart phone, etc.) accesses the internet, it is assigned a unique numerical IP address. In order to send data from one computer to another via the web, a data packet must be transferred that contains the IP address of both the sending and receiving devices. IP addresses are an essential part of the web infrastructure, as they are needed to communicate and send data. IPv4 was developed for ARPANET in 1978, and has been deployed since 1981.

IPv4 is based on 32-bit addresses written in dotted decimal notation, providing 2^32 (about 4.3 billion) addresses. IANA (Internet Assigned Numbers Authority) is the entity in charge of allocating IP address space to RIRs (Regional Internet Registries), which in turn distribute the addresses to various corporations and institutions. It has been known for several years that the pool of available IPv4 addresses was being depleted, and in fact the last allotment was distributed to the RIRs on February 3, 2011.

IPv6 is the sixth revision of the Internet Protocol and the successor to IPv4. It also provides a unique numerical IP address so that Internet-enabled devices can function. However, IPv6 is based on 128-bit addresses written in hexadecimal notation, and provides 2^128 (340,282,366,920,938,463,463,374,607,431,768,211,456) IP addresses. While not infinite, this will be sufficient to provide Internet addresses for all the PCs, Macs, smartphones and other Internet devices for many years to come.
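The difference in notation and in the size of the two address spaces is easy to see with Python's standard ipaddress module; the two addresses below are standard documentation examples, not real hosts:

    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.1")       # 32-bit address in dotted decimal
    v6 = ipaddress.ip_address("2001:db8::1")     # 128-bit address in hexadecimal groups

    print(2 ** 32)      # 4,294,967,296 possible IPv4 addresses
    print(2 ** 128)     # 340,282,366,920,938,463,463,374,607,431,768,211,456 possible IPv6 addresses
    print(int(v4), int(v6))   # beneath the notation, both are just integers of different widths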

The difficulty with getting all entities to switch from IPv4 to IPv6 is that v6 is not backward compatible with v4. IPv4 and IPv6 run as parallel networks, so exchanging data between the protocols requires special gateways. In order to switch, all operating systems, software, routers and firewalls must be upgraded, and vendors’ IT and customer service staff require training. While most newer operating systems are IPv6 capable, a large percentage of individuals and businesses are still using operating systems and devices that are IPv4 capable only. As we continue through this transition period, more vendors will find that they need to support both IPv4 and IPv6.

An important solution for working with both protocols is dual stacking. A dual-stacked device can work with IPv4 devices, IPv6 devices, and other dual-stacked devices. DNS (Domain Name System) works with dual-stacked devices: if given an IPv4 address, the dual-stack device sends an IPv4 packet, and if given an IPv6 address it sends an IPv6 packet. The drawback of dual stacking is that everything would need both an IPv4 and an IPv6 address, and systems already on IPv6 only would not have an IPv4 address. NAT (Network Address Translation) allows a device such as a router to provide one IP address for use by a private network of computers and devices, thereby reducing the number of IPv4 addresses needed by modifying IP address information in packet headers while in transit. One drawback of NAT is that it was designed to give the IETF time to work on the address depletion issue; it is a temporary fix not designed with security or privacy in mind, making it a weak link for attacks. Also, some entities may rely on NAT to avoid the time and expense of upgrading to IPv6, when they should be making that upgrade the priority.
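Dual stacking is visible from ordinary application code: a DNS lookup for a dual-stacked host returns both IPv4 (A) and IPv6 (AAAA) addresses, and the application can use whichever protocol it supports. The hostname below is a placeholder, and the results naturally depend on the host being queried and on your own connectivity:

    import socket

    for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 80,
                                                        proto=socket.IPPROTO_TCP):
        kind = "IPv6" if family == socket.AF_INET6 else "IPv4"
        print(kind, sockaddr[0])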

References

Klein, Joe. “IPv6 Playground – Next Hope.” 23 July 2010. IPv6Sec. 21 July 2011 <https://sites.google.com/site/ipv6security/>.

IPv4 Depletion – IPv6 Implementation. 03 February 2011. 21 July 2011 <https://www.arin.net/knowledge/v4-v6.html>.

Doyle, Jeff. The Dual Stack Dilemma. 04 June 2009. 21 July 2011 <http://www.networkworld.com/community/node/42436>.

WiMax

WiMax, or Worldwide Interoperability for Microwave Access, is described on the www.WiMax.com website as a standards initiative to ensure that broadband wireless radios interoperate from vendor to vendor. The website states that WiMax has the potential to replace telephone companies’ copper wire networks, cable TV’s coaxial cable infrastructure, and cellular networks. WiMax is IEEE standard 802.16 and has gone through many updates; IEEE has recently approved 802.16m, the Advanced Air Interface for Broadband Wireless Access Systems, also called Mobile WiMax Release 2. WiMax is intended for use in metropolitan area networks.

The technology is touted as offering a six-mile range and throughput of 72 Mbps using one base station, although it is acknowledged that in real-world applications a fixed broadband setup would offer a maximum of 45 Mbps, and a mobile setup would have a spectral efficiency of 5 bps/Hz, which is still considered better than 3G. The WiMax website compares this service to a coffee shop wi-fi range of 100 yards and a throughput of 11 Mbps. However, WiMax offers the security of multi-level encryption and QoS (Quality of Service) dynamic bandwidth allocation.
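As a back-of-the-envelope check on what 5 bps/Hz means in practice, the usable bit rate is roughly the spectral efficiency multiplied by the channel bandwidth; the 10 MHz channel in this sketch is an assumed value chosen only for illustration, since deployments use a range of channel sizes:

    spectral_efficiency_bps_per_hz = 5
    channel_bandwidth_hz = 10_000_000            # assumed 10 MHz channel

    throughput_bps = spectral_efficiency_bps_per_hz * channel_bandwidth_hz
    print(throughput_bps / 1_000_000, "Mbps")    # 50.0 Mbps for this assumed channel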

A WiMax radio contains a transmitter and receiver and generates electrical oscillations at carrier frequencies between 2 and 11 GHz. Most solutions use separate radios and antennas. While traditional wireless networks require antennas to be on the highest point, such as a mountain top or skyscraper, for WiMax the line of sight (LOS) is most important. With a LOS solution the range for reception is about 10 miles, while non-line-of-sight (NLOS) setups offer a range of about 4 to 5 miles. WiMax solutions include point-to-point (P2P), with one sender and one receiver, and point-to-multipoint (PMP) distribution, in which one base station can serve hundreds of dissimilar subscribers. The website describes LOS as offering the best function, while NLOS is described as “acceptable” and “adequate”.

WiMax offers three types of antennas. Omni-directional antennas broadcast 360 degrees from base stations and are used in PMP configurations; the drawback is that because the energy is diffused, the range and signal strength are limited. Sector antennas focus the beam on a smaller area, usually 60, 90 or 120 degrees, which offers greater range and throughput. Panel antennas are powered over Ethernet cable, called Power over Ethernet (PoE), and are typically used for P2P solutions. The subscriber station, or CPE (Customer Premise Equipment), can be outdoors or indoors. Outdoor CPE offers somewhat better performance because reception is not impeded by walls, but installation costs more and the unit must be weather resistant. An indoor CPE is installed by the subscriber, so it is easier and quicker to set up.

802.16e adds Scalable Orthogonal Frequency-Division Multiple Access (OFDMA) and Multiple Input, Multiple Output (MIMO) antenna systems to the physical layer. OFDMA and MIMO can help users avoid interference by changing frequencies. At the MAC layer, 802.16e uses convergence sublayers so that wireline technologies such as Ethernet, ATM and IP can be encapsulated in the air interface. Secure communications are delivered by using secure key exchange during authentication and encryption with AES or DES during data transfer.

Whether WiMax can beat wi-fi remains to be seen. A recent news article in ars technica states that the WiMax operator Clearwire is being sued by users for “throttling”, or usage caps that keep Internet connections at much lower speeds than advertised. Some think that Clearwire accepted more new customers than it had the infrastructure to handle, in the hope that it could use the subscriber fees to build up its network. This type of incident could cause individuals and companies to think twice before trying the technology.

 

References

WiMax.com. 2011. WiMax.com Broadband Solutions, Inc. 13 Jul. 2011. <http://www.WiMax.com/>

Cheng, J. “Wimax Throttling Lawsuit.” ars technica. 2011. <http://arstechnica.com/telecom/news/2011/03/wimax-throttling-lawsuit-clearwire-cant-deliver-the-goods.ars>

Advantages and Disadvantages of Increased Access to Medical Records

 

Recently, services such as EHRs (Electronic Health Records) and PHRs (Personal Health Records) have become available to store an individual’s medical information online. This information can include medical history, lab tests and results, prescription drugs, and other medical procedures. EHRs are records that are created by a doctor or hospital, and are stored at a doctor’s office, hospital, insurance company, or even an employer. PHRs are personal records that are created and managed by the individual.

There are many benefits to individuals having access to records online. It would be a central place to store records and information from all of one’s medical providers. For instance, if an individual regularly sees a general practitioner, a gynecologist or urologist, and a sports medicine specialist, all of the records from the different doctors including lab results and prescription history could be stored there, making it easier to look up details and transfer records to different doctors. For a parent with a spouse and several children, or a caregiver for elderly parents, the benefits of having instant access online to each of these individuals’ records are obvious. Trying to remember all the medications an elderly person is on, the dosing instructions, and which doctor ordered each can be a daunting task.

The obvious drawback to online health records is the question of whether those records are sufficiently secure. These days there are constant reports of large organizations, including banks and security firms such as RSA, being compromised. An individual’s health records are some of the most private data available, and if the records are compromised, potential embarrassment is the least of one’s concerns. At the least, medical data could be sold to advertisers to send targeted messages tailored to an individual’s medical issues. Among the worst possible scenarios, an individual’s medical history could be held against them by a potential employer.

When reading about EHRs, I learned that the Health Information Technology for Economic and Clinical Health (HITECH) Act of 2009 directed the Office of the National Coordinator for Health Information Technology (ONC) to implement the use of EHRs by every individual in the United States by 2014.[1] In a letter to the ONC dated May 16, 2011, Daniel R. Levinson, Inspector General for the Department of Health and Human Services, found that general IT security controls were lacking in HIT (Health Information Technology) standards, including encryption of data stored on mobile devices, two-factor authentication for remote access to an HIT system, and patching of the operating systems of the computers that would process and store the EHRs. Levinson also indicates that “the ONC deferred at this time to the HIPAA Security Rule for addressing IT security for HIT” but that HIPAA reviews identified vulnerabilities in the effectiveness of IT security. It is alarming that the government is moving ahead with plans to require EHRs for all individuals while not yet mandating adequate security.

The other side of this coin is PHRs. There are currently a few options for individuals who wish to create and manage their own medical history files online, including Google Health and Microsoft HealthVault. These services allow an individual to create an account and then upload or input their personal medical records. Individuals can designate others who are allowed to access their accounts, and the vendors claim that the records are secure. I checked both of these websites with an eye to whether I would feel secure using them. I use a Firefox add-on called Calomel SSL Validation, which grades the SSL security of any website visited. The scores are color coded and range from green (strongest security) to red (weakest security). Google Health scored orange, the second weakest level. Thirty percent of the score is based on validation of the certificate and ten percent on the domain match, both of which Google Health passed. However, thirty-four percent of the score is based on symmetric cipher strength: AES or Camellia at 256 bits is considered a strong cipher, and at 128 bits a moderate cipher, but Google Health scored poorly by using RC4, which is considered a weak cipher. Next, the symmetric key length is considered; the larger the key, the higher the strength. A 256-bit key would be considered strong, but at 128 bits Google Health again chose the weaker option.

I checked the same SSL Validation scores for Microsoft HealthVault and found it scored a somewhat more satisfactory yellow, a mid-level rating, as it uses a moderately strong symmetric cipher (AES-128) but a weak 128-bit symmetric key length. HealthVault also indicates that it is certified by TRUSTe and HONcode. [2] While Microsoft HealthVault seems to be set up a little more securely, it concerned me that there are options to log in using either a Windows Live ID or a Facebook account. This is a red flag to me, as I don’t want anyone hacking my email or social networking site to have easy access to my medical records as well. Google Health is also connected to an individual’s Google accounts, raising the same fear.
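For readers who do not use the Firefox add-on, the same basic information (which symmetric cipher and key length a site negotiates) can be checked with a few lines of Python's standard ssl module; the hostname below is a placeholder, not either of the services discussed above:

    import socket
    import ssl

    hostname = "www.example.com"                 # placeholder hostname
    context = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            name, version, bits = tls.cipher()   # e.g. ('AES128-SHA', 'TLSv1.2', 128)
            print(f"cipher={name} protocol={version} symmetric_key_bits={bits}")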

From reading both websites’ privacy policies and terms of use, it appears that information can flow or be connected in certain ways between an individual’s PHR account and their other Google or Live accounts. Both, as is typical, have numerous disclaimers of responsibility for their organizations, while at the same time listing usage rules that users are held accountable for. The policies also mention that they may release an individual’s personal information “as required by law” or to “protect and defend the rights or property” of Microsoft or Google. These statements are rather vague and leave much open to interpretation. Most importantly, as third parties to which an individual chooses to release their medical data, these companies are not covered by HIPAA regulations.

In conclusion, while EHRs and PHRs could be extremely useful to individuals for managing their own or their dependents’ medical data, I find that there is still much work to be done to make these databases secure. Strong regulations for encryption and security, as well as clear rules about the use of and access to personal information, must be created and implemented before these online records are commonly used. While researching, I found several articles conjecturing that Google would be discontinuing Google Health shortly due to low adoption rates. I believe that until sufficient security is in place and mandated, many potential users such as myself will choose not to participate.

References

http://docs.ismgcorp.com/files/external/ONCstandards051711.pdf

http://www.google.com/intl/en_us/health/about/privacy.html

https://account.healthvault.com/help.aspx?topicid=PrivacyPolicy&culture=en-US

http://www.hon.ch/HONcode/Patients/Visitor/visitor.html

http://www.markle.org/health/markle-common-framework/connecting-consumers

http://www.cdt.org/issue/personal-health-records


[1] Daniel R. Levinson, Audit of Information Technology Security Included in Health Information Technology Standards. May 16, 2011. http://docs.ismgcorp.com/files/external/ONCstandards051711.pdf

 

DMCA: Anti-Piracy or Anti-Fair Use?

The Digital Millennium Copyright Act, or DMCA, was signed into law by President Clinton on October 28, 1998. It was created in part to implement two WIPO (World Intellectual Property Organization) treaties: the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty. These treaties require member countries to provide copyright protection to works from other member countries. The DMCA also addresses other issues related to copyright law. The DMCA modifies Title 17 of the United States Code and contains five titles, of which arguably the first two are the most important. Title I is the WIPO Copyright and Performances and Phonograms Treaties Implementation Act, which added section 1201 containing the anti-circumvention provisions. Title II is the Online Copyright Infringement Liability Limitation Act, which creates a safe harbor for online service providers under section 512.

As technology evolves, many new areas of copyright law must be created or clarified. Not that long ago, entertainment subject to copyright mainly consisted of books and radio programs. Even when television and movies became common, copyright enforcement mainly involved protecting against copying a book or script without the author’s authorization. Now, however, there are many types of media that make it much easier to infringe, whether intentionally or not, on a copyright owner’s work. Movies or television shows can be copied to tape or DVD, as can music CDs. Theatrical plays and concerts can be filmed on a cell phone camcorder. Some of this is done by individuals wanting to keep a copy of the entertainment for their own use, but there are also individuals who make a career of piracy, illegally copying movies, games and so on in order to sell the copies for their own profit rather than the rightful copyright holder’s. To protect against this type of piracy, companies have come up with technological DRM (digital rights management) tools meant to prevent items from being accessed or copied illegally.

Since it is recognized that there are sometimes legal reasons for individuals to circumvent access controls, the DMCA has seven exemptions built into section 1201. These include the right of libraries, archives and educational institutions to circumvent access controls to determine whether they wish to obtain authorized access to the work for their institution. Law enforcement, intelligence and other governmental entities are allowed to circumvent for official purposes. Reverse engineering of software programs is allowed, with authorization, in order to create interoperable programs. Encryption research to find vulnerabilities and flaws is allowable, but again only if the person has received approval to do so. Circumventing access controls in order to use technology for the protection of minors is allowed, as is circumvention in order to disable a product that disseminates personally identifiable information about the user. A final exemption is for the purpose of security testing.

In addition to the exemptions built into the DMCA, the Library of Congress holds hearings every three years to decide whether any additional types of exemptions to the anti-circumvention rules are warranted. The most recent rulemaking on anti-circumvention was in 2009, when the Library approved six exemptions to the Act. These exemptions will be valid until the next review in 2012. One important exemption allows making copies of short portions of movies from a legally purchased DVD for fair use purposes such as education, documentary filmmaking, or noncommercial videos. Other exemptions include allowing circumvention of the access control DRM in order to enable text-to-speech programs on e-books (but only if there is no audio version of the book that can be purchased). Circumventing DRM in order to do security testing on video games is allowed in order to investigate security flaws or vulnerabilities (often found in the DRM itself). One of the most highly anticipated rulings is the exemption from the prohibition on “jailbreaking” a smartphone. Often the owner of an iPhone would like to bypass the DRM from Apple in order to make the operating system of the phone interoperable with applications that are not approved by Apple. It was decided that this is a fair use and not a copyright infringement.

While civil liberties organizations hail these updated exemptions to the anti-circumvention rules, they feel that there are still drawbacks to the Act because it gives broad authority to the copyright holder to determine who can circumvent the DRM. For example, although an exemption is allowed for individuals wishing to research security vulnerabilities and flaws, the researcher must still first request permission from the copyright holder to break the DRM in order to do the research. If the copyright holder declines to grant this approval and the researcher circumvents the DRM anyway and finds flaws, he cannot publish those flaws so that they can be fixed without exposing himself to potential litigation.

Section 512 of the DMCA involves a concept of “safe harbor” which protects online service providers from being held liable for information posted or transmitted on their site by subscribers, but only if the service provider quickly removes access to any material that is identified in a copyright holder’s complaint. The service provider must also show that they have no knowledge of or benefit from the infringing activity; have a copyright policy for their site and provide notification of the policy to their subscribers; and list an agent that will deal with copyright complaints for their site. If the service provider meets the criteria for safe harbor, they will not be liable for any damages that the individual who posted the material would be liable for.

If a service provider is notified of a complaint about copyright infringing materials on their site, they are not required to notify the individual who posted the material prior to taking it down. The provider does need to notify the poster once the material is removed, and must include information in the notice such as the name and address of the complaining party, which material is being questioned, and what the copyrighted material is. If the individual does not agree that the material they posted was a copyright infringement they can file a counter claim with the service provider, who must forward the counter claim to the person who made the complaint.

In its white paper “Unintended Consequences: Twelve Years Under the DMCA,” the Electronic Frontier Foundation states that “Years of experience with the ‘anti-circumvention’ provisions of the DMCA demonstrate that the statute reaches too far, chilling a wide variety of legitimate activities in ways Congress did not intend. As an increasing number of copyright works are wrapped in technological protection measures, it is likely that the DMCA’s anti-circumvention provisions will be applied in further unforeseen contexts, hindering the legitimate activities of innovators, researchers, the press, and the public at large.” Numerous examples can be found in this white paper, as well as in other sources such as J.D. Lasica’s book Darknet, illustrating uses of DMCA provisions to harm competition and research that have little or nothing to do with stopping piracy. For example, in 2003 some internal memos from Diebold Election Systems discussing known software and security flaws in electronic voting machines were leaked. A number of students posted the information online and were sent takedown notices stating that if they did not remove the memos they would be sued under the DMCA. Many felt that this created censorship without due process. In another example, printer companies used proprietary chips to indicate that an ink cartridge was empty, so that if a consumer had the cartridge refilled by an after-market ink company, the cartridge would not work in the printer because the chip would indicate that it was empty. When the after-market ink companies disabled the chips, the large printer companies sued under the DMCA because their access control device had been tampered with. The result was to stifle competition, not impede piracy.

In the end the question becomes whether the DMCA rules are indeed protecting the interests of copyright holders and discouraging piracy, and whether copyright law is the appropriate field for some of these statutes, as many of the rulings have strayed into licensing and how consumers may use products they purchase, rather than simply preventing works from being copied for illegal distribution. Many fear that these rules, and the uses large companies are trying to make of them, foreshadow censorship and reduced creativity. Any way you look at it, this is a very sticky issue.

 

References

Anderson, Nate. “Apple Loses Big in DRM Ruling: Jailbreaks Are ‘Fair Use’.” Law and Disorder. ars technica. July 26, 2010. <http://arstechnica.com/tech-policy/news/2010/07/apple-loses-big-in-drm-ruling-jailbreaks-are-fair-use.ars>

Lasica, J.D. Darknet: Hollywood’s War Against the Digital Generation. Hoboken: John Wiley & Sons, 2005. Print.

“Anticircumvention Rulemaking.” U.S. Copyright Office. Web. 26 Apr. 2011. <http://www.copyright.gov/1201/>

Chilling Effects Clearinghouse. Web. 26 April 2011. <http://www.chillingeffects.org>

“Unintended Consequences: Twelve Years Under the DMCA.” Electronic Frontier Foundation. March 2010. <http://www.eff.org/wp/unintended-consequences-under-dmca>

Electronic Signature in Global and National Commerce Act (E-SIGN)

The E-SIGN Act (Electronic Signature in Global and National Commerce Act) is a federal law created by Congress to “facilitate the use of electronic records and signatures in interstate or foreign commerce”. It was enacted in 2000 after the UETA (Uniform Electronic Transactions Act) was created in 1999. The UETA has been adopted in whole or part by 48 states. UETA provides that both parties must have agreed to conduct the transaction electronically in order for the law to apply, though the agreement may be implied rather than explicit. For an electronic document to be valid, it must be in a form that may be retained and accurately reproduced. UETA further provides that if state law requires a document to be notarized, the e-signature of a notary is acceptable. The E-SIGN act requires that “no contract, record or signature may be denied legal effect solely because it is in electronic form”. If a state has enacted the uniform or full version of UETA, then state law prevails. If a state enacted a modified version of UETA, then E-SIGN overrides any modifications to UETA that are inconsistent with E-SIGN.

The Statute of Frauds requires that contracts for sales of more than $500 and leases for more than $1,000 must be in writing. The E-SIGN act provides that e-contracts and e-signatures meet the requirement of writing, so the Statute of Frauds applies to electronic contracts as well as written ones.

A digital signature is described as “an electronic sound, symbol or process attached to or associated with a record and executed or adopted by a person with the intent to sign the record”. There are two types of digital signatures. One is a digitized handwritten signature, in which a person uses something like an ePad and a software program to write their signature so it can be attached electronically to documents. The other is a public-key infrastructure-based digital signature: the signer has a private key used to sign the document, and the recipient has the corresponding public key to verify the identity of the signer.
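As a minimal sketch of that public-key flow, here is an example using the third-party Python "cryptography" package; the document text and key size are arbitrary choices for illustration, not anything required by E-SIGN or UETA:

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.exceptions import InvalidSignature

    document = b"I agree to the terms of this contract."
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # the signer keeps the private key and signs the document with it
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    signature = private_key.sign(document, pss, hashes.SHA256())

    # the recipient uses the matching public key to verify that the document
    # came from the key holder and has not been altered since it was signed
    public_key = private_key.public_key()
    try:
        public_key.verify(signature, document, pss, hashes.SHA256())
        print("signature valid")
    except InvalidSignature:
        print("signature invalid")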

By providing that a contract, record or signature cannot be denied legal effect solely because it is in electronic form, E-SIGN provides legal effect to an electronic signature.

Limited warranties vs. implied warranties

The Apple iPad, as with other Apple products, comes with a one year limited warranty. The warranty is considered limited under the Magnuson-Moss Warranty Act because it has a time limitation, and also because it is limited to the original end-user purchaser. This product, as with all goods sold, is covered by the UCC. Under this warranty, Apple is the warrantor, and the consumer in the person of the original end-user purchaser is the warrantee. Close examination of the warranty (which required an actual magnifying glass) makes me think that the warranty is as much or more for the benefit of Apple as for the consumer.

The warranty states that Apple warrants the hardware against defects in materials and workmanship under normal use for one year from the date of retail purchase by the original end-user. If a hardware defect is found and a valid claim is made within the warranty period, Apple can choose to repair the unit with new or refurbished parts, replace it with a new or refurbished unit, or refund the purchase price.

Apple has avoided being responsible for any implied warranties by stating in capital letters, “Apple specifically disclaims any and all statutory or implied warranties, including, without limitation, warranties of merchantability and fitness for a particular purpose and warranties against hidden or latent defects.” Presumably by placing this paragraph in all capital letters, Apple meets the requirement under UCC 2-316 that the disclaimer be “conspicuous”.

The warranty also has an “exclusions and limitations” section which specifies various items or incidents that would not be covered, such as damage due to an earthquake, and damage caused by repairs done by a non-Apple service provider. The warranty also specifies: “do not open the hardware product. Opening the hardware product may cause damage that is not covered by this warranty. Only Apple or an authorized Apple Service Provider should perform service on this hardware product.”

Marc L. Roark wrote an interesting issue brief for the Duke Law & Technology Review about the use of limitation of warranties to deter consumers from circumventing intellectual property rights. Apple does not want consumers to open their products and make their own updates to the product, whether it is iPhone users wishing to use the phone on a non-contractual carrier or iPod/iPad users who want to add apps that are not supported by Apple. The reason is two-fold: first because Apple makes a profit when consumers use their contractual carrier or their supported apps, and second because Apple does not want hackers or pirates to copy their products.

However, the first sale doctrine of patent and copyright law allows consumers to alter a product after they purchase it, so manufacturers are using limitations on the warranty as an incentive for consumers not to open the hardware. By voiding the warranty if the consumer opens the unit, uses it in an unauthorized manner, or seeks repair from a non-approved vendor, Apple seeks to regain control over the devices it sells. In the behavioral model discussed by Roark in his brief, the results suggested that the threat of having the warranty voided was three times more effective as a deterrent to opening the device than the threat of litigation, mainly because most people did not feel they were likely to be prosecuted for opening the device, whereas they felt it was quite likely the warranty would indeed be voided. However, the study concludes that many consumers do not feel that the loss of Apple’s limited warranty is much of a loss; therefore the author believes that offering a better warranty would be a better deterrent against unauthorized activities.

With this limited warranty, Apple has created enough loopholes for itself that few consumers seem likely to benefit from it; therefore I feel that the warranty is more of a protection for the warrantor than for the warrantee in this case.

References

Roark, Marc L. “Limitations of Sales Warranties as an Alternative to Intellectual Property Rights: an Empirical Analysis of iPhone Warranties’ Deterrent Impact on Consumers.” 2010 Duke L. & Tech. Rev. 018 (2010): n.pag. Duke Law & Technology Review. Web. 16 Mar. 2011.

Liuzzo, Anthony L.  Essentials of Business Law. New York:  McGraw-Hill, 2010.  Print.

Climate Change: The Copenhagen Diagnosis

Global warming and climate change have been hotly debated topics over the past few years. As signs of change such as the hole in the ozone layer and the melting of Arctic ice develop, many people feel this is due to anthropogenic, or man-made, causes such as the burning of fossil fuels. Others believe that any global warming is a natural occurrence and would happen regardless of human influence. This debate has strong proponents on both sides.

There has recently been a debate in the U.S. House of Representatives regarding new powers, effective January 2, 2011, that allow the Environmental Protection Agency (EPA) to impose greenhouse gas (GHG) regulations under the Clean Air Act. On February 2, 2011 the House Energy and Commerce Committee held a hearing on “H.R. 910, The Energy Tax Prevention Act of 2011”, which aims to amend the Clean Air Act so the EPA cannot use it to regulate GHG. A request was made to hold another hearing focused on the science of the issue, and on March 8, 2011 the Committee held a hearing entitled “Climate Science and EPA’s Greenhouse Gas Regulations”. The Republicans brought two expert witnesses to debunk claims that global warming is being driven by human influence, and the Democrats brought five expert witnesses to try to convince the Committee that there is a valid problem.

I read through the statements from this hearing with interest, and found that unfortunately politics seems to be having too great an influence on science. The Republicans’ main interest seemed to be in proving that concerns about the effects of global warming are merely a scare tactic, and that stemming the human activities that may affect the environment would be too costly in terms of money and jobs. Each side had witnesses with many scientific credentials, and they all seemed to have some valid points. However, their reasoning seemed to fall very much along the lines of the politicians’ goals, making it difficult for a layman to decide which argument to put more faith in.

One of the Republicans’ witnesses was John R. Christy, Distinguished Professor of Atmospheric Science at the University of Alabama. He stated that although attributing extreme weather events to anthropogenic influence is popular, weather systems are dynamic and extreme weather events are likely to happen often. He cited the 20th Century Reanalysis Project, which seeks to generate weather maps as far back as 1871 and has found that the three major indices related to extreme events, the Pacific Walker Circulation, the North Atlantic Oscillation, and the Pacific-North America Oscillation, do not show a trend of increased circulation since 1871. Therefore, he believes there is no supporting evidence that human factors influenced the major circulation patterns during that time period. Christy goes on to state that we should expect extreme events to occur and plan for them, and that the measured climate history we have of about 130 years is not long enough to be representative of possible climate extremes. Christy also supports the Republicans’ argument about the costliness of action by stating, “Developing countries in Asia already burn more than twice the coal than North America does and that discrepancy will continue to expand. The fact that our legislative actions will be inconsequential in the grand scheme of things can be seen….”

The next witness was Roger A. Pielke Sr. of the University of Colorado and Colorado State University. Pielke believes that focusing only on carbon dioxide and a few other greenhouse gases is too narrow and misses other important human influences on the climate. He also explained the difference between global warming and climate change. According to Pielke, “Global warming is typically defined as an increase in the global average surface temperature. A better metric is the global annual average heat content measured in Joules. Global warming involves the accumulation of heat in Joules within the components of the climate system. This accumulation is dominated by the heating and cooling within the upper layers of the oceans. Climate Change is any multi-decadal or longer alteration in one or more physical, chemical and/or biological component of the climate system.” He also states that “Detection of robust anthropogenic signals in regional climate predictions is seldom possible within…timescales of a few decades.” Pielke also spent time denigrating the CCSP (U.S. Climate Change Science Program) and the IPCC (Intergovernmental Panel on Climate Change) as using the research of only a few scientists and excluding valid scientific perspectives. From an outside layperson’s perspective, it seemed as though he might be airing “sour grapes”.

Another witness was Christopher B. Field, Director of the Department of Global Ecology at the Carnegie Institution for Science. Field discussed “a series of robust conclusions from climate science… including two 2010 reports from the US National Academy of Sciences, “Climate Stabilization Targets: Emissions, Concentrations, and Impacts over Decades to Millennia” (Solomon 2010), and “Advancing the Science of Climate Change” (Matson 2010), the 2009 report from the US Global Change Research Program, “Global Climate Change Impacts in the United States” (Karl et al. 2009) and the Fourth Assessment Report of the IPCC (IPCC 2007a, c, b).” His conclusions were quite contrary to those of some of the other witnesses: “Global warming is unequivocal and primarily human-induced. Climate changes are underway in the United States and are projected to grow. Widespread climate-related impacts are occurring now and are expected to increase. Climate change will stress water resources. Crop and livestock production will be increasingly challenged. Coastal areas are at increasing risk from sea-level rise and storm surge. Risks to human health will increase. Climate change will interact with many social and environmental stresses. Thresholds will be crossed, leading to large changes in climate and ecosystems. Future climate change and its impacts depend on choices made today.”

Dr. Knute Nadelhoffer, Director of the University of Michigan Biological Station, followed by stating that climate change is real, that science has irrefutably shown that rising concentrations of greenhouse gases are resulting from human activities, and that there are no other scientific explanations for the climate changes occurring. Nadelhoffer brought a letter to append to his testimony, signed by 149 scientists from Michigan who agreed with his conclusions. Richard C. J. Somerville of the Scripps Institution of Oceanography added his testimony that the most recent research and latest observations demonstrate that climate change is occurring and in many cases is exceeding earlier projections. He cites the IPCC’s most recent Assessment Report, AR4 from 2007, but also refers to newer findings documented in the Copenhagen Diagnosis, which was published in 2009.

The Copenhagen Diagnosis was compiled by 26 scientists from 8 different countries, and was not affiliated with any organization. The findings included that global carbon dioxide emissions from fossil fuels had increased by 40% between 1990 and 2008, and that even if emissions were stabilized at current levels, within twenty years there would be a 25% probability of warming of 2 degrees Celsius.  A delay in taking action could result in irreversible changes to continental ice sheets, Amazon rain forests and West African monsoons.  “The risk of transgressing critical thresholds (“tipping points”) increases strongly with ongoing climate change. Thus, waiting for higher levels of scientific certainty could mean that some tipping points will be crossed before they are recognized.” Somerville points to evidence of warming in air temperatures, ocean temperatures, melting ice, and rising sea levels. He concludes: “The greenhouse effect is well understood. It is as real as gravity. The foundations of the science are more than 150 years old. Carbon dioxide in the atmosphere amplifies the natural greenhouse effect and traps heat. We know carbon dioxide is increasing because we measure it. We know the increase is due to human activities like burning fossil fuels because we can analyze the chemical evidence for that. Our climate predictions are coming true. Many observed climate changes, like rising sea level, are occurring at the high end of the predicted changes. Some changes, like melting sea ice, are happening faster than the anticipated worst case. Unless mankind takes strong steps to halt and reverse the rapid global increase of fossil fuel use and the other activities that cause climate change, and does so in a very few years, severe climate change is inevitable. Urgent action is needed if global warming is to be limited to moderate levels.”

As a layperson it is hard to sort out the competing arguments of scientists and decide, first, whether climate change is indeed happening; second, if it is, whether anthropogenic influence is the cause; and third, what if anything should be done to reduce any human activities that may influence the process. I believe that since it could be decades before scientists have enough data to draw conclusive opinions, it is better to err on the side of caution. We know that burning fossil fuels, deforestation in the Amazon and Indonesia, and polluting the oceans are all bad for the environment. Allowing corporations to influence lawmakers to weaken regulations so corporations can make a higher profit will ultimately result in disasters such as the BP Horizon oil spill in the Gulf. Arguing that we should not reduce our level of pollution because other countries may not reduce their emissions seems like a political rather than a scientific point. Reducing emissions, deforestation and pollutants is a worthwhile goal even if the extent to which it might affect climate change is not yet determined.

Works Cited:

Broder, John. “At House E.P.A. Hearing, Both Sides Claim Science.” The New York Times. 09 March 2011. <http://www.nytimes.com/2011/03/09/science/earth/09climate.html>

U.S. House. Energy and Commerce Committee. Climate Science and EPA’s Greenhouse Gas Regulations. (H.R. 910 The Energy Tax Prevention Act of 2011) March 8, 2011. <http://energycommerce.house.gov/hearings/hearingdetail.aspx?NewsID=8304>

The Copenhagen Diagnosis, 2009: Updating the World on the Latest Climate Science. I. Allison, N.L. Bindoff, R.A. Bindschadler, P.M. Cox, N. de Noblet, M.H. England, J.E. Francis, N. Gruber, A.M. Haywood, D.J. Karoly, G. Kaser, C. Le Quéré, T.M. Lenton, M.E. Mann, B.I. McNeil, A.J. Pitman, S. Rahmstorf, E. Rignot, H.J. Schellnhuber, S.H. Schneider, S.C. Sherwood, R.C.J. Somerville, K. Steffen, E.J. Steig, M. Visbeck, A.J. Weaver. The University of New South Wales Climate Change Research Centre (CCRC), Sydney, Australia, 60pp.