Friday 28 March 2014

Microsoft: Enterprise Mobility for Every Business and Every Device

Here is an article from Brad Anderson, Corporate Vice President, Windows Server & System Center.
Earlier today in San Francisco, Satya spoke about the wide-ranging work Microsoft is doing to deliver a cloud for everyone and every device. Satya’s remarks certainly covered a lot of ground – including big announcements about the availability of Office on the iPad, as well as the release of what we call the Microsoft Enterprise Mobility Suite.

Regarding the Enterprise Mobility Suite (EMS), I want to share some additional details about the upcoming general availability of Azure Active Directory Premium, as well as our latest updates to Windows Intune.

If you haven’t had a chance to read this morning’s post from Satya, I really recommend checking it out here. In the post, Satya talks about the focus of our company being “Mobile First – Cloud First.” I love this focus! The mobile devices that we all use every day (and, honestly, could not live without) were built to consume the cloud, and the cloud is what enables these devices to become such a critical and thoroughly integrated part of our lives.

For years I have emphasized that, as we architect the solutions that help organizations embrace the devices their users want to bring into work (i.e. BYOD), the cloud should be at the core of how we enable this. As I have worked across the industry with numerous customers, it has become clear that a cloud-based infrastructure for Enterprise Mobility is the go-to choice for forward-looking organizations around the world that want to maximize their Enterprise Mobility capabilities.

Enterprise Mobility is a big topic – so big, in fact, that it extends beyond mobile device management (MDM) and the need to address BYOD. Now Enterprise Mobility stretches all the way to how to best handle new applications and services (SaaS) coming into the organization. Enterprise Mobility also has to address data protection at the device level, at the app level, and at the data level (via technologies like Rights Management).

With these challenges in mind, we have assembled the EMS to help our customers supercharge their Enterprise Mobility capabilities with the latest cloud services across MDM, MAM, identity/access management, and information protection.

On one point I do want to be very specific: The EMS is the most comprehensive and complete platform for organizations to embrace these mobility and cloud trends. Looking across the industry, other offerings feature only disconnected pieces of what is needed. When you examine what Microsoft has built and what we are delivering, EMS is simply the only solution that has combined all of the capabilities needed to fully enable users in this new, mobile, cloud-enabled world.

Additionally, with Office now available on iPad, and cloud-based MDM from Intune, over time we will deliver integrated management capabilities for Office apps across the mobile platforms. 

The capabilities packaged in the EMS are a giant step beyond simple MDM. The EMS is a people-first approach to identity, devices, apps, and data – and it allows you to actively build upon what you already have in place while proactively empowering your workforce well into the future.

The EMS has three key elements:
  • Identity and access management delivered by Azure Active Directory Premium
  • MDM and MAM delivered by Windows Intune
  • Data protection delivered by Azure AD Rights Management Services


<< Cloud-based Identity & Access Management >>
Azure Active Directory (AAD) is a comprehensive, cloud-based identity/access management solution which includes core directory services that already support some of the largest cloud services (including Office 365) with billions of authentications every week. AAD acts as your identity hub in the cloud for single sign-on to Office 365 and hundreds of other cloud services.

Azure AD Premium builds on AAD’s functionality and gives IT a powerful set of capabilities to manage identities and access to the SaaS applications that end-users need.

Azure AD Premium is packed with features that save IT teams time and money, for example:
  • It delivers group management and self-service password reset – dramatically cutting the time/cost of helpdesk calls.
  • It provides pre-configured single sign-on to more than 1,000 popular SaaS applications so IT can easily manage access for users with one set of credentials.
  • To improve visibility for IT and security, it includes security reporting to identify and block threats (e.g. anomalous logins) and require multi-factor authentication for users when these abnormalities are detected.

The Azure AD Premium service will be generally available in April. For more info, check out this new post from the Azure team.


<< Cloud-delivered MDM >>
Windows Intune is our cloud-based MDM and PC management solution that helps IT enable their employees to be productive on the devices they love.

Since its launch we have regularly delivered updates to this service at a cloud cadence. In October 2013 and January 2014 we added new capabilities like e-mail profile management for iOS, selective wipe, iOS 7 data protection configuration, and remote lock and password reset.

Following up on these new features, in April we will also be adding more Android device management with support for the Samsung KNOX platform, as well as support for the upcoming update to Windows Phone.


<< Data Protection from the Cloud >>
Microsoft Azure Rights Management is a powerful and easy-to-use way for organizations to protect their critical information when it is at rest or in transit.

This service is already available today as part of Office 365, and we recently added extended capability for existing on-prem deployments. Azure RMS now supports the connection to on-prem Exchange, SharePoint, and Windows Servers.

In addition to these updates, Azure RMS also offers customers the option to bring their own key to the service, as well as access to logging information. Azure RMS enables access policies to be embedded into the actual documents being shared: when a document is shared in this manner, the user’s access rights to the document are validated each time the document is opened. If an employee leaves an organization, or if a document is accidentally sent to the wrong individual, the company’s data is protected because there is no way for the unauthorized recipient to open the file.


<< Cost Effective Licensing >>
Now with these three cloud services brought together in the EMS, Microsoft has made it easy and cost effective to acquire the full set of capabilities necessary to manage today’s (and the future’s) enterprise mobility challenges.

As we have built the Enterprise Mobility Suite we also have thought deeply about the need to really simplify how EMS is licensed and acquired. With this in mind, EMS is licensed on a per-user basis. This means that you will not need to count the number of devices in use, or implement policies that would limit the types of devices that can be used.

The Enterprise Mobility Suite offers more capabilities for enabling BYO and SaaS than anything else on the market – and at a fraction of the cost charged elsewhere in the industry.

* * *

This is a major opportunity for IT organizations to take huge leaps forward in their mobility strategy and execution, and Microsoft is committed to supporting every element of this cloud-based, device-based, mobility-centric transformation.

EMS is available to customers via Microsoft’s Enterprise Volume Licensing channels beginning May 1st.


Reference:
Enterprise Mobility for Every Business and Every Device
http://blogs.technet.com/b/in_the_cloud/archive/2014/03/27/enterprise-mobility-for-every-business-and-every-device.aspx

Wednesday 26 March 2014

Microsoft: Group Policy Setting to Disable the Junk E-mail UI and Filtering Mechanism for Microsoft Outlook

<< Symptoms >>
In Outlook, the Junk E-mail settings may not be available for you to configure. For example, when you look at the Delete group on the main Ribbon in Outlook 2010, the Junk control is disabled (grayed out). This is demonstrated in the following figure.

Or, in earlier versions of Outlook, the Junk E-mail Options button is missing from the Preferences tab in the Options dialog box.

 
<< Cause >>
This symptom occurs when you have the 'Hide Junk Mail UI' group policy configured for Outlook. The following figure shows the location of this policy for Outlook 2010 in the group policy editor.

This policy setting disables the Junk E-mail settings in the Outlook user interface and stops the junk e-mail filter from filtering messages in Outlook.


<< Additional Information >>
When this policy is enabled, the following data in the registry is configured.

Key: HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\x.0\Outlook
DWORD: DisableAntiSpam
Value: 1 = policy is in effect

Note: x.0 in the above registry path represents the version of Outlook (15.0 = Outlook 2013, 14.0 = Outlook 2010, 12.0 = Outlook 2007, 11.0 = Outlook 2003).
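For illustration, the same policy data could be distributed as a .reg file. This is a hypothetical sketch for Outlook 2010 (14.0 in the registry path), not the supported deployment method; Group Policy is the supported way to set it.

```
Windows Registry Editor Version 5.00

; Hypothetical example for Outlook 2010 (14.0): setting DisableAntiSpam to 1
; hides the Junk E-mail UI and stops junk e-mail filtering. Deleting the
; value (or setting the policy to Not Configured) restores default behavior.
[HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\14.0\Outlook]
"DisableAntiSpam"=dword:00000001
```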

If you need to change this policy, please work with your Administrator to set the policy to either Disabled or Not Configured.

There is a related group policy called Junk E-mail protection level that has a No Protection option, as shown in the following figure.

This policy is similar to the Hide Junk Mail UI policy, except that with the 'No Protection' setting Outlook still performs some level of junk e-mail filtering. When you enable the Junk E-mail protection level policy with the No Protection setting, the following junk e-mail scanning features still take place on a message:
  • Links can be disabled if the message is suspected of being a phishing message.
  • The message sender is compared against your Safe Senders and Blocked Senders lists, and the message is treated accordingly.
  • The message sender's domain suffix is compared against the Blocked Top-Level domains list, and the message is treated accordingly.
  • The message encoding is compared against the Blocked Encodings list, and the message is treated accordingly.

There is no need to enable the Junk E-mail protection level policy if the Hide Junk Mail UI policy is already enabled. However, if you disable or do not configure the Hide Junk Mail UI policy, then the Junk E-mail protection level policy is a possible alternative if you still need to manage junk e-mail filtering options in Outlook.


Reference:
Outlook: Policy setting to disable the Junk E-mail UI and filtering mechanism
http://support.microsoft.com/kb/2180568

Monday 24 March 2014

Microsoft: Special Characters in Files or Folders Name Not Supported in WebDAV

If a file name or a folder name includes special characters such as an ampersand (&), a plus sign (+), or a percent sign (%), the files or folders will not show up or work over a variety of WebDAV connection methods. This is a known issue because those characters are reserved characters in URLs. The only solution is to rename the files and folders so that they show up.
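As an aside, you can see why these characters are troublesome by looking at how they must be escaped in a URL path. A quick illustrative Python sketch (not from the original article):

```python
# Illustration: the characters the article mentions are reserved in URLs,
# so a WebDAV client has to percent-encode them when building the request
# path for a file. Names containing them often break naive clients.
from urllib.parse import quote

names = ["report&notes.docx", "a+b.txt", "50%off.pdf"]
for name in names:
    # quote() escapes reserved characters ("/" is kept safe by default)
    print(name, "->", quote(name))
```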

Friday 21 March 2014

IT Technology: Auto Clicker Typer


Auto Clicker Typer is a handy utility that saves you from doing the same task over and over again. With Auto Clicker Typer, you can automate text typing as well as mouse clicking. It is easy to use: all you have to do is press the “Record” button to start the automation recording process and click “Stop” to end it. One thing that makes Auto Clicker Typer different is that you can save your actions into a script for flexibility, such as looping the actions. For convenience, you can set a keyboard shortcut to quickly run the appropriate script at any time. The free version of Auto Clicker Typer lets you create up to 10 scripts.


Reference:
Auto Clicker Typer
http://en.kioskea.net/download/download-12459-auto-clicker-typer

Thursday 20 March 2014

Paessler: Active Directory Integration

By following the steps below, you are able to integrate your Paessler PRTG with your Microsoft Active Directory.

Step 1: Prepare Your Active Directory
  • In your Active Directory, please make sure that the users you want to give access to PRTG are members of the same AD group.
  • You can also organize users in different groups, for example, one group whose members will have administrator rights within PRTG, and another one whose members will have read-only rights within PRTG.

Step 2: Prepare Your PRTG Server
  • Make sure that the computer running PRTG is a member of the domain you want to integrate with. You can check this setting in your machine's System Properties (for example, Control Panel | System and Security | System, then click the Change settings link).

Step 3: Add Domain and Credentials (optional) to System Settings
  • In the PRTG web interface, switch to the System Administration—System and Website settings.
  • In the Active Directory Domain field, enter the name of your local domain. Note: You can only integrate one AD domain into PRTG.
  • Optional: PRTG will use the same Windows user account that is used to run the "PRTG Core Server Service". By default, this is the "local system" Windows user account. If this user does not have sufficient rights to query a list of all existing groups from the Active Directory, provide the credentials of a user account with full AD access by using the Use explicit credentials option.
  • Save your settings.

Step 4: Add a New User Group

  • Switch to the User Groups tab (see System Administration—User Groups ).
  • Click on the Add User Group button to add a new PRTG user group.
  • In the dialog that appears, enter a meaningful name and set the Use Active Directory setting to Yes.
  • From the Active Directory Group drop-down menu, select the group of your Active Directory whose members will have access to PRTG. If you have a very large Active Directory, you will see an input field instead of a drop-down. In this case, you can enter the group name only; PRTG will add the prefix automatically.
  • With the New User Type setting, define the rights a user from the selected Active Directory group will have when logging in to PRTG for the first time. You can choose between Read/Write User or Read Only User (the latter is useful for showing data to a large group of users).
  • Save your settings.

Done
That's it. All users in this Active Directory group can now log in to PRTG using their AD domain credentials. Their user accounts will use the PRTG security context of the PRTG user group you just created. 

Notes
  • Active Directory users can log in to the web interface using their Windows username and password (please do not enter any domain information in PRTG's Login Name field). When such a user logs in, PRTG will automatically create a corresponding local account on the PRTG core server. Credentials are synchronized with every login.
  • By default, there aren't any rights set for the new PRTG user group. Initially, users in this group will not see any objects in the PRTG device tree. Please edit your device tree object's settings and set access rights for your newly created user group in the Inherit Access Rights section. Note: The easiest way is to set these rights in the Root Group Settings .
  • PRTG does not support SSO (single sign-on).


Reference:
Active Directory Integration
http://www.paessler.com/manuals/prtg9/active_directory_integration

Symantec: Can I delete contents in Symantec\LiveUpdate Administrator\Downloads?

You can delete the contents of C:\ProgramData\Symantec\LiveUpdate Administrator\Downloads, but do not delete the Downloads folder itself. Also, do not delete the contents while LUA is running a download schedule, as doing so will corrupt the definitions.


Reference:
Can I delete contents in C:\Documents and Settings\All Users\Application Data\Symantec\LiveUpdate Administrator\Downloads?
http://www.symantec.com/connect/forums/can-i-delete-contents-cdocuments-and-settingsall-usersapplication-datasymantecliveupdate-admi

IT Management: SIM IT Trends Study: 2013

Here is a very interesting survey regarding IT Trends in 2013:
https://c.ymcdn.com/sites/www.simnet.org/resource/resmgr/it_trends/sim_it_trends_study_2013_-_s.pdf

I really like this quote:
May the wind always be at your back, may the sun shine warm upon your face, and may your uptime be 100% 24/7/365!

Tuesday 18 March 2014

Cisco: Stop a Command from Running in Cisco IOS

Generally, the CTRL+SHIFT+6 key combination will stop a command that is running in Cisco IOS.


Reference:
How to stop a command - make it stop running (such as ping)
https://supportforums.cisco.com/discussion/10331026/how-stop-command-make-it-stop-running-such-ping

Cisco: Set Up an IOS Router or Switch for SSH

You may setup a Cisco router or switch for SSH with the following commands / steps:

!--- Step 1: Configure the hostname if you have not previously done so.

hostname Router

!--- The aaa new-model command causes the local username and password on the router
!--- to be used in the absence of other AAA statements.

aaa new-model
username cisco password 0 cisco

!--- Step 2: Configure the DNS domain of the router.
ip domain-name rtp.cisco.com

!--- Step 3: Generate an SSH key to be used with SSH.
crypto key generate rsa
ip ssh time-out 60
ip ssh authentication-retries 2

!--- Step 4: By default the vtys' transport is Telnet. In this case,
!--- Telnet is disabled and only SSH is supported.

line vty 0 4
transport input SSH

!--- Instead of aaa new-model, you can use the login local command.
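After applying the configuration above, you can sanity-check the result from the same IOS CLI (illustrative verification commands; output varies by platform and IOS version):

```
!--- Verify that the SSH server is enabled and see its version/timeout settings.
show ip ssh

!--- List any SSH sessions currently connected to the device.
show ssh

!--- Optional hardening: restrict the device to SSH version 2
!--- (only after the RSA key has been generated).
ip ssh version 2
```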


Reference:
Configuring Secure Shell on Routers and Switches Running Cisco IOS
http://www.cisco.com/c/en/us/support/docs/security-vpn/secure-shell-ssh/4145-ssh.html

Monday 17 March 2014

Apple: MacDaddyX

MacDaddyX is a powerful, yet easy-to-use MAC address changer (spoofer). It does not change the hardware burned-in MAC address; it changes the "software-based" MAC address, and it can do so for any NIC that permits changes using built-in utilities. MacDaddyX helps people protect their privacy by hiding their real MAC addresses on widely available networks. It also helps network and IT security professionals troubleshoot network problems, test intrusion detection/prevention systems (IDS/IPS), test incident response plans, build high-availability solutions, recover (MAC address based) software licenses, etc.


Reference:
MacDaddyX
https://www.macupdate.com/app/mac/25729/macdaddyx

Microsoft: Where is My Hidden Hard Disk Space?

There are a few ways for you to view your hidden hard disk space:

1.  Check your volume shadow copy storage (system restore & previous versions), recycle bin and pagefile

<< Volume Shadow Copy >>
You can check your volume shadow copy with the following steps:
  • Right click on Command Prompt
  • Click on Run as administrator
  • Type in the command: vssadmin list shadows

<< Pagefile >>
You can view or modify the pagefile settings as follows:
  • Launch sysdm.cpl from the Start menu search or run box (Win+R)
  • Navigate to Advanced –> Settings –> Performance –> Advanced –> Change. From this screen you can change the paging file size, set the system to not use a paging file at all, or just leave it up to Windows to deal with—which is what I'd recommend in most cases.


2.  Perform Disk Cleanup
  • Right click (C:) click Disk Cleanup
  • Click More Options
  • Click Clean up for System Restore and Shadow Copies

3.  Download Windirstat ( http://windirstat.info/ ) to discover the allocation of your hard disk space.


References:
1.  Windows has hidden about 40gb of hard disk space?
http://forum.notebookreview.com/windows-os-software/258364-windows-has-hidden-about-40gb-hard-disk-space.html

2.  Understanding the Windows Pagefile and Why You Shouldn't Disable It
http://lifehacker.com/5426041/understanding-the-windows-pagefile-and-why-you-shouldnt-disable-it

3.  Hidden file taking up 70Gb of disk space, even showing hidden system files does not reveal what or where it is.
http://answers.microsoft.com/en-us/windows/forum/windows_vista-performance/hidden-file-taking-up-70gb-of-disk-space-even/d114f350-1ac3-4a2e-8404-aee237046862

Microsoft: Modify / View Time to Live ( TTL ) on Domain Name System ( DNS ) Records

The following steps allow you to modify / view the Time to Live (TTL) on Domain Name System (DNS) records. The TTL is used by name servers and some DNS clients to determine how long a record may be cached. In Windows Server 2008, the default TTL for DNS records is 60 minutes. You can modify the TTL of a record in the DNS Management console.

By default, the TTL field is not displayed on records. If you want the TTL field displayed on records, enable the Advanced View feature.


Modify the TTL on a Single Record
  1. In Administrative Tools, open the DNS Management console.
  2. Click View.
  3. Click to select the Advanced check box.
  4. Right-click the record that you want to modify. The record displays a TTL field that you can modify.

Modify the Default TTL for an Entire Zone

  1. In Administrative Tools, open the DNS Management console.
  2. Click View.
  3. Click to select the Advanced check box.
  4. Right-click the zone that you want to modify. The zone record displays a Minimum Default Zone TTL field that you can modify. The default for a new zone is 60 minutes.

Control the Caching Time on a Client Computer
  • Start Registry Editor (Regedt32.exe).
  • Locate and click the following key in the registry:    
          HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Dnscache\Parameters
  • On the Edit menu, click Add Value, and then add the following registry value:
          Value name: MaxCacheEntryTtlLimit
          Data type: REG_DWORD
          Radix:
          Value data: 0x1-0xFFFFFFFF (seconds)
  • Quit Registry Editor.
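As an illustrative alternative to editing by hand, the same value could be applied with a .reg file. This is a hypothetical sketch (the 300-second cap is an arbitrary example value within the documented range):

```
Windows Registry Editor Version 5.00

; Hypothetical example: cap the DNS client cache at 300 seconds (0x12c).
; Any value in the documented 0x1-0xFFFFFFFF range is valid.
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Dnscache\Parameters]
"MaxCacheEntryTtlLimit"=dword:0000012c
```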


Reference:
HOW TO: Modify Time to Live on Domain Name System Records
http://support.microsoft.com/kb/297510

Friday 14 March 2014

IT Technology: Free TeraBytes ( TB ) Cloud Storage

Absolutely free terabytes of cloud storage for you to grab. These are Chinese/Pacific Rim based companies, so if that bothers you, you must not have heard of the NSA. With the USA having given up the last breath of honest governance, there is no disadvantage in embracing the Chinese cloud storage giveaway. Understand that if your data is the type that might ruffle the feathers of government, it is best to keep that data in a country in which you do not reside. And, of course, you can always encrypt before uploading.

For those of you concerned about installing the PC client and mobile device software: I have noticed that removing the software from my phone and/or computer, once it has been downloaded, installed, and used to sign in to my account, does not reduce my increased cloud storage capacity.

Of note, according to the translations, the Chinese think of it not as a "cloud" but as a "network drive". Get it free while you can...

What follows is a quick synopsis of three cloud solutions.


Baidu (2TB)

Baidu offers 2TB free for life ... but only after one downloads, installs, and signs into their phone app. After that, one must still claim the 2000GB. Unfortunately, neither the website, the desktop client, nor the apps are offered in English. That will change soon.

Here is the best guide I've seen to your 2000GB of Baidu cloud storage. 

Of note, Baidu seems to be the only player of the three covered in this post that currently offers uploading via torrent and streaming of your movies.

Also of note: while I have claimed my 2TB from Baidu, I have not used this network storage offering much, as I find the interface a bit cluttered.


Weiyun (Tencent) (10TB)
Although I had never been exposed to them until now, Tencent is apparently a huge tech company in SE Asia, one of the top 5 in the world. That size allows Tencent (like Mega) to store data in multiple countries, not just China. There is some confidence to be had in that.

One may upload files no larger than 4GB unless one engages the "power upload plugin," at which point the maximum file size increases to 32GB.

Here are the steps required to get your 10TB:
  1. Register in the QQ service
  2. Install Tencent Cloud app for your smartphone. Login with the data from step 1.
  3. Click “Get 10TB”.
  4. And don't forget to use the English version of Tencent's web offering.

Qihoo 360 (36TB)
Qihoo 360 is currently the MacDaddy of cloud storage with 36TB. To get hold of all 36TB, you must download the PC client and the Android or iOS app. Downloading and signing in to the phone app alone will net you 26TB, but if you also download and sign in to the PC client, you will get another 10TB, which gets you to 36,000GB.

That's a lot of storage, but it does not stop there. If one signs in each day and clicks on the drawing, one gets the chance to increase one's storage quota by around 1/4GB to around 7GB. Over 30 days this netted me nearly 50GB ... not that I really needed it. Not only is there this game play, but there is another that nets me an additional 150GB/day and up. My understanding is that 360 maxes out at 150TB.

Unfortunately, Qihoo is still displayed in Chinese, as are the apps and the PC client. That will change soon, though. Patience.

For those of you cautious about installing the PC client: I chose to install only the mobile device software and get the 26TB for my wife's account. She was, however, disturbed by the related Chinese notifications, so I later removed that software from her phone, incurring no penalty in the process.

Here are the steps required to get your 36TB:
  1. Go to http://huodong.yunpan.360.cn/xt and click the orange button.
  2. In the popup window, click the text near the bottom right corner to Register a new account.
  3. Then enter your email address and your password twice.
  4. Get 10TB-- Now go to http://huodong.yunpan.360.cn/xt to download the PC software (labeled 1), sign in with your account from steps 1 and 2 above and then go back to http://huodong.yunpan.360.cn/xt to claim your 10TB by clicking the orange button.
  5. Get 26TB-- Now repeat step number 4, but download your phone app (the QR code helps to get your phone to the right place), sign in with your phone, and then claim your 26TB.

Reference:
1TB?, 2TB?, 10TB, 36TB? How Many TB of Cloud Storage Space do You Need? Tencent, Yunio, Baidu, 360 and BTSync...
http://polifrogblog.blogspot.com/2014/01/1tb-2tb-10tb-36tb-how-many-tb-of-cloud.html

IT Technology: Honeypot

A honeypot is a trap set to detect, deflect, or, in some manner, counteract attempts at unauthorized use of information systems. Generally, a honeypot consists of a computer, data, or a network site that appears to be part of a network, but is actually isolated and monitored, and which seems to contain information or a resource of value to attackers. This is similar to the police baiting a criminal and then conducting undercover surveillance.

Honeypots can be classified based on their deployment and based on their level of involvement. Based on deployment, honeypots may be classified as:
  1. Production honeypots
  2. Research honeypots

Production honeypots are easy to use, capture only limited information, and are used primarily by companies or corporations; Production honeypots are placed inside the production network with other production servers by an organization to improve their overall state of security. Normally, production honeypots are low-interaction honeypots, which are easier to deploy. They give less information about the attacks or attackers than research honeypots do.

Research honeypots are run to gather information about the motives and tactics of the Blackhat community targeting different networks. These honeypots do not add direct value to a specific organization; instead, they are used to research the threats that organizations face and to learn how to better protect against those threats. Research honeypots are complex to deploy and maintain, capture extensive information, and are used primarily by research, military, or government organizations.

Based on design criteria, honeypots can be classified as:
  1. Pure honeypots
  2. High-interaction honeypots
  3. Low-interaction honeypots

Pure honeypots are full-fledged production systems. The activities of the attacker are monitored by using a casual tap that has been installed on the honeypot's link to the network. No other software needs to be installed. Even though a pure honeypot is useful, stealthiness of the defense mechanisms can be ensured by a more controlled mechanism.

High-interaction honeypots imitate the activities of the production systems that host a variety of services and, therefore, an attacker may be allowed a lot of services to waste his time. According to recent research into high-interaction honeypot technology, by employing virtual machines, multiple honeypots can be hosted on a single physical machine. Therefore, even if the honeypot is compromised, it can be restored more quickly. In general, high-interaction honeypots provide more security by being difficult to detect, but they are highly expensive to maintain. If virtual machines are not available, one honeypot must be maintained for each physical computer, which can be exorbitantly expensive. Example: Honeynet.

Low-interaction honeypots simulate only the services frequently requested by attackers. Since they consume relatively few resources, multiple virtual machines can easily be hosted on one physical system, the virtual systems have a short response time, and less code is required, reducing the complexity of the virtual system's security. Example: Honeyd.
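To make the low-interaction idea concrete, here is a minimal, illustrative Python sketch (not a real honeypot product such as Honeyd, and not suitable for production): it fakes a single service banner and records every connection attempt, which is essentially all a low-interaction honeypot does.

```python
# Minimal low-interaction "honeypot" sketch: listen on a port, log every
# peer that connects, send a fake service banner, and hang up. No real
# service logic exists behind the banner.
import socket
import threading

BANNER = b"220 mail.example.com ESMTP ready\r\n"  # fake SMTP greeting

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Start the listener in a background thread; return (port, log)."""
    log = []  # connection attempts observed so far
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))   # port 0 = pick a free ephemeral port
    srv.listen(5)

    def serve():
        for _ in range(max_conns):
            conn, peer = srv.accept()
            log.append(peer)      # record who knocked
            conn.sendall(BANNER)  # pretend to be a real service
            conn.close()          # offer no further interaction
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()[1], log

# Usage: probe our own honeypot once, as an attacker's scanner would.
port, log = run_honeypot()
probe = socket.create_connection(("127.0.0.1", port), timeout=5)
banner = probe.recv(1024)
probe.close()
print("banner:", banner, "| attempts logged:", len(log))
```

In a real deployment the log would go to durable storage and the listener would cover several commonly probed ports, but the structure is the same.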


Reference:
Honeypot (computing)
http://en.wikipedia.org/wiki/Honeypot_%28computing%29

IT Technology: Getif 2.x Network Tool


Getif is a free, multi-functional, Windows GUI-based network tool written by Philippe Simonet. It is, amongst other things, an excellent SNMP tool that allows you to collect and graph information from SNMP devices. These devices include (but are by no means limited to) Windows 2000 (using the SNMP4NT, SNMP4W2K, or SNMP-Informant extension agents, of course!) and other OSes, as well as devices manufactured by most major network companies (e.g. Cisco, 3COM, D-Link, Nokia, etc.).

Getif is much more than an SNMP browser, however: it can graph OID values over time; display a device's interface information, routing table, and ARP table; and perform basic port scans, traceroutes, NSLookups, and IP scans.

In order to use Getif effectively, you must have SNMP installed, and know the IP address and the community name(s) of the device you want to access.  If you don't, the SNMP functions will not work.


Reference:
Getif 2.x Network Tool
http://www.wtcs.org/snmp4tpc/getif.htm

Apple: Enable SNMP on Mac OS X for PRTG Monitoring

The steps below allow you to enable the SNMP on Mac OS X for PRTG monitoring.

<< Step 1 >>
On your Mac OS machine start the Terminal, locate the snmpd.conf file under /etc/snmp/snmpd.conf and save a backup copy, maybe with this command:
mv /etc/snmp/snmpd.conf /etc/snmp/snmpd.conf.org


<< Step 2 >>
Change the snmpd.conf file with the following command:
sudo nano /etc/snmp/snmpd.conf

In our example, it looks like this:
#Allow read-access with the following SNMP Community String:
rocommunity public

# all other settings are optional but recommended.

# Location of the device
syslocation data centre A

# Human Contact for the device
syscontact SysAdmin

# System Name of the device
sysName SystemName

# the system OID for this device. This is optional but recommended,
# to identify this as a MAC OS system.
sysobjectid 1.3.6.1.4.1.8072.3.2.16



<< Step 3 >>
Start the SNMP service, e.g. with the following command:
sudo launchctl load -w /System/Library/LaunchDaemons/org.net-snmp.snmpd.plist


<< Step 4 >>
In PRTG, create a new device which represents your Mac OS machine, using the respective IP address or DNS name.
  • For this new device, perform an auto-discovery to let PRTG create SNMP sensors automatically.
  • You can also add other SNMP sensors manually. Not all measurements will be available via SNMP, but we were able to successfully create the following sensor types:
      • SNMP Traffic sensors to monitor traffic flowing through the network card(s)
      • SNMP Linux Load Average sensor (this one works although it is intended for Linux)
      • SNMP System Uptime to monitor how long the system has been running
      • SNMP Disk Free sensors to monitor free disk space
      • SNMP CPU Load sensor to monitor the load of each CPU
      • Several generic SNMP values, such as ICMP errors and UDP datagrams, found by the auto-discovery.


<< Step 5 >>
Since we started the SNMP service manually in Step 3, the service will not run after a reboot of the Mac OS system.

In order to start the SNMP service automatically when the Mac OS X system boots, you can edit the file /etc/hostconfig.

In the file, locate the line
APPLETALK_HOSTNAME

and add the following entry on the line before it:
SNMPSERVER=-YES-

This should start your SNMP service automatically on system startup.
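Step 5's edit can be rehearsed with sed on a throwaway copy before touching the real file. The file contents below are illustrative; on a real system the target would be /etc/hostconfig, edited with sudo:

```shell
# Build a scratch copy standing in for /etc/hostconfig (contents are illustrative)
cat > /tmp/hostconfig.example <<'EOF'
AFPSERVER=-NO-
APPLETALK_HOSTNAME=-AUTOMATIC-
EOF

# Insert the SNMPSERVER entry on the line before APPLETALK_HOSTNAME
sed -i.bak '/^APPLETALK_HOSTNAME/i\
SNMPSERVER=-YES-' /tmp/hostconfig.example

cat /tmp/hostconfig.example
```

The same sed command pointed at /etc/hostconfig (under sudo) performs the real edit, with a .bak backup left behind.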


Reference:
How do I activate SNMP on Mac OS in order to monitor it with PRTG?
http://www.paessler.com/knowledgebase/en/topic/41843-how-do-i-activate-snmp-on-mac-os-in-order-to-monitor-it-with-prtg

IT Management: Five Tech Issues CEOs Should Worry About

ROBERT PLANT: CEOs are constantly juggling what they consider truly pressing issues on many levels, some real, some perceived. One issue is technology: they fear being the frog in the metaphorical tech saucepan, where the temperature increase is gradual enough that they don't jump out, but where this effect will, in the end, lead to their demise. However, if they consider that technology is in fact a thread linking several other pivotal areas of concern, they may feel more in control of this seemingly invisible hand.

The first of the related issues is security. It is highly unlikely that any company in the world has not in recent years been subject to a breach, by agents of state or for-profit hackers.

The second issue is to cloud or not to cloud. CEOs worry that moving to the cloud will bring increased risk from hackers and a loss of control.

The third concern is data, specifically Big Data. CEOs understand that a paradigm shift is under way but, like an ancient mariner heading out from port, they are concerned about where the voyage will lead: the discovery of a new world with riches, or a fall off the edge.

The fourth issue is human capital. Technology has replaced humans on the assembly line, removed people from semiskilled jobs such as call centers, and is currently sublimating the middle management ranks.

Finally, the fifth issue is gaining visibility into the future of technology. CEOs know that the world is not flat and that technology is the magma that moves the global tectonic plates; understanding the direction technology will take is key to successfully aligning their firm with the flow.

As such, CEOs should focus more on business technology and the tacit issues to which it is linked, gaining a “fivefecta” in the process and avoiding the frog’s fate.

Robert Plant (@drrobertplant) is an associate professor at the School of Business Administration, University of Miami, in Coral Gables, Fla.


Reference:
Five Tech Issues CEOs Should Worry About
http://blogs.wsj.com/experts/2014/03/12/five-tech-issues-ceos-should-worry-about/

Google: Google Drive Gets A Big Price Drop, 100GB Now Costs $1.99 A Month


Google today significantly dropped the prices for its Google Drive online storage service. The first 15GB of storage remain free, but 100GB now costs just $1.99 per month instead of $4.99.

Even more impressively, though, you can now get a terabyte of online storage for $9.99 a month, down from $49.99.

If you really need a lot of online storage space, you can also get 10 terabytes for $99.99 a month and then add more storage from there in 10 terabyte steps (so 30 terabytes will set you back $299.99 per month).

For most people, even a terabyte of storage should be more than enough for a long time to come, even if you store lots of high-res images on Drive. Just like before, the additional storage works across Drive, Gmail and Google+ Photos.

You can now sign up for these new plans here and if you’ve already subscribed, you’ll automatically be subscribed “to a better plan at no additional cost,” Google says in the announcement today.

Google’s new prices significantly undercut those of many of its competitors in this space. Dropbox, for example, charges $9.99 a month for 100GB. Paid plans for Microsoft’s OneDrive, which offers 7GB of free storage, start at $25 per year for 50GB of storage and 100GB costs $50. Google, as far as I’m aware, does not offer any discounts for pre-paying for an annual plan.


References:
1.  Google Drive Gets A Big Price Drop, 100GB Now Costs $1.99 A Month
http://techcrunch.com/2014/03/13/google-drive-gets-a-big-price-drop-100gb-now-costs-1-99-a-month/

2.  Storage plan pricing
https://support.google.com/drive/answer/2375123?hl=en

Thursday 13 March 2014

Apple: Get Memory Utilization From Your Mac With UNIX Command

The UNIX script below allows you to get the memory utilization (used memory, in MB) from your MacBook.
#!/bin/bash

# Free memory in MB, from vm_stat (which reports 4096-byte pages)
free_blocks=$(vm_stat | grep free | awk '{ print $3 }' | sed 's/\.//')
free=$(($free_blocks*4096/1048576))
# Total physical memory in MB, so the figure is not hard-coded to an 8GB machine
total=$(($(sysctl -n hw.memsize)/1048576))
used=$(($total-$free))
echo "0:$used:OK"
* Note: The output from the script can be read by Paessler PRTG.
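The page-count arithmetic can be sanity-checked on any machine with a canned vm_stat-style line (the page count below is invented):

```shell
# A canned "Pages free" line in vm_stat's format; 25431 is a made-up sample value
sample='Pages free:                           25431.'
free_blocks=$(echo "$sample" | grep free | awk '{ print $3 }' | sed 's/\.//')
# 4096-byte pages converted to MB: 25431 * 4096 / 1048576 = 99 (integer division)
free=$((free_blocks * 4096 / 1048576))
echo "0:$free:OK"
```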

Apple: Get Values of Properties From PLIST File With UNIX Command

The UNIX command below allows you to get the value of a property from a PLIST file.
#!/bin/bash -e

Data=`defaults read /Library/Server/Caching/Logs/LastState | grep "TotalBytesFromOrigin" | awk '{print $3}'|cut -d ';' -f 1`
echo "0:$Data:OK"
* Note: The output from the script can be read by Paessler PRTG.
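The grep/awk/cut pipeline can be tried anywhere with a canned line in the style of defaults read output (the byte count below is invented):

```shell
# A made-up line shaped like the TotalBytesFromOrigin entry that defaults read prints
sample='    TotalBytesFromOrigin = 52644462;'
# Field 3 is "52644462;"; cut strips the trailing semicolon
bytes=$(echo "$sample" | grep "TotalBytesFromOrigin" | awk '{print $3}' | cut -d ';' -f 1)
echo "0:$bytes:OK"
```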

Apple: Get Total Sent Traffic From Your Mac With UNIX Command

The UNIX script below allows you to get the total sent traffic from your MacBook.
#!/bin/sh

# Output bytes for the physical interfaces (column 10), skipping the header row
DataOut=`netstat -bi | grep -v Obytes | grep -v "-" | grep "^en" | awk '{ x += $10 } END { print x }'`
# Loopback rows have fewer columns, so the output bytes are in column 6
DataOut2=`netstat -bi | grep -v Obytes | grep -v "-" | grep "^lo" | awk '{ x += $6 } END { print x }'`
# Arithmetic expansion; the original ( ... ) syntax would create an array, not a sum
DataTol=$(( DataOut + DataOut2 ))

echo "0:$DataTol:OK"
* Note: The output from the script can be read by Paessler PRTG. 
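The awk summation can be checked anywhere with a couple of made-up rows laid out like netstat -bi output (interface names and byte counts below are invented):

```shell
# Two fake interface rows in netstat -bi column order; output bytes are field 10
total=$(printf 'en0 1500 <Link#4> a1:b2:c3:d4:e5:f6 0 0 100 0 0 2048 0\nen1 1500 <Link#5> a1:b2:c3:d4:e5:f7 0 0 50 0 0 1024 0\n' \
  | awk '{ x += $10 } END { print x }')
echo "0:$total:OK"
```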

Microsoft: The Server-side Trace: What, Why, and How

Any time you open SQL Server Profiler and run a trace, you're running a client-side trace. Even if you open SQL Profiler on the server and run it there, it’s still client-side. To run a server-side trace, we need to create a script. If that last sentence made your stomach tighten up, don't worry...this will be completely painless.

 
What's the difference?
There's a cost to running any trace, of course. What we want to do is minimize the impact of the trace on the system, especially for long-running traces or busy production servers.

You can do a lot to reduce the impact of even your client-side traces – for example, filtering your data, limiting the events that you trace on, running short traces, and so on. But there is a significant additional cost to running client-side traces. SQL Server MVP Linchi Shea walks us through a very illuminating benchmark test he conducted ("Performance Impact: Profiler Tracing vs. Server Side SQL Tracing", [1]) that shows the benefit of a server-side trace over client-side. Among the findings:
  • While the client-side traces dragged transaction throughput down by 10% (or more), there was little to no difference between NO trace and a server-side trace.
  • A Profiler trace can consume a significant amount of network bandwidth (in this test, a minimum of 35%, and sometimes 70% or more of the 100Mbps network). The server-side trace consumes no network bandwidth, of course...it runs on the server!

Some of you savvy DBAs and developers will have already spotted another benefit over client side traces: flexibility. Scripting your traces allows you to automate and customize to your heart's content, and even schedule traces with SQL Agent. Yet another plus is that you can keep your trace defined on the server; if you often have to run a trace for a particular event, like diagnosing a prolonged spike in CPU usage, you can just turn that trace on with a single command.


How do I start?
Let's walk through the process of creating a server-side trace.
  1. Open up Profiler and create a new trace.
  2. Select Save to File and select a location (it doesn’t matter where, we will be changing this). Select Enable File Rollover.
  3. Select Enable Stop Time (again, the actual time doesn’t matter, as we will change this later).
  4. Choose your events and columns from the Events Selection tab.
  5. Run the trace and then stop it right away.
  6. From the File menu, choose Export > Script Trace Definition > For SQL Server 2005 (or whichever is appropriate in your environment) and save the script to file.
  7. Open your file in SSMS, making sure you’re connected to the instance you want to profile.

/****************************************************/
/* Created by: SQL Server 2008 Profiler             */
/* Date: 09/01/2009  10:29:30 PM         */
/****************************************************/


-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @DateTime datetime

set @DateTime = '2009-09-01 23:28:03.000'
set @maxfilesize = 5

-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share

exec @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere', @maxfilesize, @Datetime
if (@rc != 0) goto error

-- Client side File and Table cannot be scripted

-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 14, 1, @on
exec sp_trace_setevent @TraceID, 14, 9, @on
exec sp_trace_setevent @TraceID, 14, 6, @on
exec sp_trace_setevent @TraceID, 14, 10, @on


What does it all mean?
Let's explore and edit our script.

Variables: @DateTime is the trace stop time. Edit the stop date and time, or set it to NULL for no stop time. Or, if you want to impress your friends, set your script up to always give you a fixed-duration trace (in this case, two hours) with set @DateTime = DateAdd(hh, 2, GetDate())

You can also change the @maxfilesize (which applies to your trace output files) here at the top of your script.

On line 22 you see an exec sp_trace_create statement, which creates but does not start your new trace definition.
  • You'll want to change "InsertFileNameHere" to a location on the server you’re tracing, or on a drive or share it can reach. For example, "\\mycomputer\sharedFolder\TraceName1". SQL will append the .TRC extension to whatever filename you provide.
  • The 0 in your sp_trace_create parameter list represents the @option input. Set @option to 2 for trace file rollover. There’s also an 8 option for blackbox (see BOL) that’s not compatible with the other options, and a 4 option that will shut down SQL Server if your trace fails (I recommend against this).
  • If you want to limit the number of files the trace creates - for example, to 10 files - add 10 to the end of the parameter list. This won’t stop the trace after it creates the number of files given for @filecount. Instead, when the trace creates a new file, it deletes the oldest file for this trace. So if you start off with Trace_1, Trace_2 … Trace_10, and the trace then creates Trace_11, Trace_1 will be deleted. This keeps you from filling up your hard drive with trace files.

So we wind up with something like this:

-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
declare @DateTime datetime

set @DateTime = DateAdd(hh, 2, GetDate())
set @maxfilesize = 25

exec @rc = sp_trace_create
    @TraceID output,
    2,
    N'\\Someplace\MyCoolShare\TraceFile2009',
    @maxfilesize,
    @Datetime,
    20                -- @filecount
if (@rc != 0) goto error

Next up is a big section of sp_trace_setevent statements.

exec sp_trace_setevent @TraceID, 14, 1, @on
exec sp_trace_setevent @TraceID, 14, 9, @on
exec sp_trace_setevent @TraceID, 14, 6, @on
exec sp_trace_setevent @TraceID, 14, 10, @on
exec sp_trace_setevent @TraceID, 14, 14, @on
...

These add your selected trace events (such as SQL:Statement Completed and Deadlock Graph) and columns (like TextData) to the trace. Clearly, the easiest way to create this list is to select the events you want in Profiler, before you export the trace script. But it can also be useful to look up which events are represented by which event numbers, so if you want to recreate this trace in the future with more or fewer events, you can just add new events or comment out others. You can find the codes for Profiler events and data columns in Books Online, "Describing Events by Using Data Columns".

If you set any filters, you will see an sp_trace_setfilter command for each one just below the sp_trace_setevent section. Here is one example that filters out rows for the trace (@TraceID) where AppName (column 10) is NOT LIKE (comparison operator 7) N'SQL Server Profiler'.

exec sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Server Profiler'

The third parameter (value 0) is the AND operator (the value 1 means OR)...this would come into play if you had other filters. While you certainly can go look up the values you need and set more filters this way, I find it simpler to set your filters in Profiler before you create the trace script.


Can we start the trace now?
Yes, I was just coming to that. The line of code that actually starts your trace is "exec sp_trace_setstatus @TraceID, 1" . I will say that before you start your trace, it’s a good idea to add your stop commands to the bottom of the script. Just like driving a car, you really need to know how to stop before you can go.

-- sp_trace_setstatus @traceid = 2, @status = 0   -- Trace stop
-- sp_trace_setstatus @traceid = 2, @status = 2   -- Trace delete
-- SELECT * FROM sys.fn_trace_getinfo(0);         -- Get info on all server-side traces

Now I have @traceid = 2 in both of these commands, but your trace won't necessarily have an ID of 2. When you run your trace, make note of the returned value: that is your trace ID. Then change the @traceid in your two sp_trace_setstatus lines to match the trace ID returned by your script, and save your script!

sp_trace_setstatus...@status = 0 stops the trace. Even after you stop the trace, the trace definition itself is still out on the server. If you choose, you can close and delete the trace from the server with sp_trace_setstatus...@status = 2. You can stop your trace manually, or you can just wait for your stop time to roll around (if you set one). After your trace has stopped, you can collect the trace output files from the directory you specified in the sp_trace_create statement. That’s all there is to it!


Reference:
The Server-side Trace: What, Why, and How
http://www.toadworld.com/platforms/sql-server/w/wiki/10400.the-server-side-trace-what-why-and-how.aspx

Microsoft: Optimize for Ad Hoc Workloads - Server Configuration Option

The optimize for ad hoc workloads option is used to improve the efficiency of the plan cache for workloads that contain many single use ad hoc batches. When this option is set to 1, the Database Engine stores a small compiled plan stub in the plan cache when a batch is compiled for the first time, instead of the full compiled plan. This helps to relieve memory pressure by not allowing the plan cache to become filled with compiled plans that are not reused.

The compiled plan stub allows the Database Engine to recognize that this ad hoc batch has been compiled before but has only stored a compiled plan stub, so when this batch is invoked (compiled or executed) again, the Database Engine compiles the batch, removes the compiled plan stub from the plan cache, and adds the full compiled plan to the plan cache.

Setting the optimize for ad hoc workloads to 1 affects only new plans; plans that are already in the plan cache are unaffected.

The compiled plan stub is one of the cacheobjtypes displayed by the sys.dm_exec_cached_plans dynamic management view. It has a unique SQL handle and plan handle. The compiled plan stub does not have an execution plan associated with it, and querying for the plan handle will not return an XML Showplan.

Trace flag 8032 reverts the cache limit parameters to the SQL Server 2005 RTM setting which in general allows caches to be larger. Use this setting when frequently reused cache entries do not fit into the cache and when the optimize for ad hoc workloads Server Configuration Option has failed to resolve the problem with plan cache.


* Note:
Trace flag 8032 can cause poor performance if large caches make less memory available for other memory consumers, such as the buffer pool.


Reference:
optimize for ad hoc workloads Server Configuration Option
http://technet.microsoft.com/en-us/library/cc645587.aspx

APC: Disabling APC UPS Audible Alarm Tones

Please follow the steps below to disable the APC UPS audible alarm tones.

Step 1:
Connect the UPS's RJ45-to-USB communications cable between the UPS and the computer. Ensure the USB cable is inserted directly into one of the host computer's native USB ports. Do not use a hub or aftermarket USB card to establish communications.

Step 2:
Once the computer has detected the "New Hardware Device", install the PowerChute Personal Edition software onto a currently supported operating system.

Step 3:
Once the software has been properly installed, the UPS's audible alarms can be disabled by accessing the "Notifications Configuration" section found under the "Configurations" tab and altering the "Battery Back-UPS Alarm" section. Here the alarms can be enabled for all events, disabled for all events, or disabled during specific UPS conditions or customer-specified time periods.


Reference:
Why might my APC Back-UPS Product be beeping?
http://www.schneider-electric.us/sites/us/en/support/faq/faq_main.page?page=content&country=ITB&lang=en&id=FA158827&redirect=true

Wednesday 12 March 2014

Apple: Get Values from PLIST File, Save The Values In A Text File and Restart Caching Service of Mac OS X Server with AppleScript

The AppleScript below allows you to:
1.  Get the values of properties from a PLIST file.
2.  Save the values of properties in a text file.
3.  Restart the caching service of Mac OS X Server.
set the plistfile_path to "/Library/Server/Caching/Logs/LastState.plist"
tell application "System Events"
    set p_list to property list file (plistfile_path)
    set byte to text of property list item "TotalBytesReturned" of p_list
    set request to text of property list item "TotalBytesRequested" of p_list
    set peer to text of property list item "TotalBytesFromPeers" of p_list
    set origin to text of property list item "TotalBytesFromOrigin" of p_list
    set myDate to date string of (current date)
    set myTime to time string of (current date)
    set myDateTime to myTime & " " & myDate
end tell

try
    set theFile to (((path to desktop) as text) & "OUTPUT:" & "TotalBytes.txt") as alias
    set N to open for access theFile with write permission
    get eof N
    if result > 0 then
        set theText to read N
        set eof N to 0
        write myDateTime & return & return & "Total Bytes Returned" & return & byte & return & return & "Total Bytes Requested" & return & request & return & return & "Total Bytes From Peers" & return & peer & return & return & "Total Bytes From Origin" & return & origin & return & return & return & return & return & return & theText to N
    end if
    close access N
end try

do shell script "/Applications/Server.app/Contents/ServerRoot/usr/sbin/serveradmin stop caching" password "AdminPassword" with administrator privileges
delay 60
do shell script "/Applications/Server.app/Contents/ServerRoot/usr/sbin/serveradmin start caching" password "AdminPassword" with administrator privileges

* Note: Replace AdminPassword with your administrator account's actual password.

Apple: Triggering AppleScripts from Calendar Alerts in Mac OS X

AppleScripts are great tools for increasing your daily productivity. They're even better when they can be set to run unattended, at night, on weekends or during downtime. In Lion, iCal included a handy option for attaching a script to a calendar event. Just create an event, add a Run Script alarm, point it to the desired script and you're good to go. Things changed in Mountain Lion, though. Presumably for security reasons, the Run Script alarm option was removed from the Calendar app. Despite its removal, however, there are still some ways you can trigger scripts from Calendar events.

iCal event alarm choices in OS X 10.7 Lion

Calendar event alarm choices in OS X 10.8 Mountain Lion


Reference:
Triggering AppleScripts from Calendar Alerts in Mountain Lion
http://www.tuaw.com/2013/03/18/triggering-applescripts-from-calendar-alerts-in-mountain-lion/

Apple: Restart A Mac OS X Server Service With AppleScript

With the AppleScript below, you can restart any service of Mac OS X Server. This particular script stops and starts the mail service every hour.

You can copy and paste it into AppleScript Editor, compile it and save it as an application. You can then run it. You need to run it from an administrator account and keep that user logged in.

If once an hour (every 3600 seconds) is too often, just change 3600 to however many seconds you want.

Be sure to change AdminPassword with the Admin user's actual password.
repeat

do shell script "/Applications/Server.app/Contents/ServerRoot/usr/sbin/serveradmin stop mail" password "AdminPassword" with administrator privileges

delay 60

do shell script "/Applications/Server.app/Contents/ServerRoot/usr/sbin/serveradmin start mail" password "AdminPassword" with administrator privileges

delay 3600

end repeat

Reference:
Mavericks mail server stops distributing group email after a few hours of usage
https://discussions.apple.com/message/24069163#24069163

Apple: Read Values Of Properties From PLIST File Through AppleScript

The AppleScript below allows you to read the values of properties from a PLIST file. This particular example reads the bundle version from Info.plist.
set the plistfile_path to "/Users/................/AppName-Info.plist"

tell application "System Events"
set p_list to property list file (plistfile_path)
set releaseVersion to value of property list item "CFBundleVersion" of p_list
end tell

Reference:
Read values of properties from plist through AppleScript !
http://iphonenativeapp.blogspot.com/2013/09/read-values-of-properties-from-plist_30.html

Apple: Add A Line Of Text Above The Existing First Line Of A Text File With AppleScript

The AppleScript below allows you to add a line of text (e.g. "This is line 2") above the existing first line of a text file named test.txt in a folder called OUTPUT on the desktop.
try
    set theFile to (((path to desktop) as text) & "OUTPUT:" & "test.txt") as alias
    set N to open for access theFile with write permission
    get eof N
    if result > 0 then
        set theText to read N
        set eof N to 0
        write "This is line 2" & return & theText to N
    end if
    close access N
end try
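The same prepend operation can be sketched in plain shell; the scratch file below is created just for the demonstration:

```shell
# Create a scratch file with a single existing line
printf 'This is line 1\n' > /tmp/prepend-test.txt

# Prepend a new line above the existing first line, via a temp file
printf 'This is line 2\n%s\n' "$(cat /tmp/prepend-test.txt)" > /tmp/prepend-test.txt.new
mv /tmp/prepend-test.txt.new /tmp/prepend-test.txt

cat /tmp/prepend-test.txt
```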

Reference:
Applescript: to open a file, add a line of text at the top, and then close the file.
https://discussions.apple.com/message/18858350#18858350

Apple: Get Date and Time with AppleScript

<< Current Date >>

This Applescript gets the current date: current date.

Just type those 2 words into a blank Script Editor document and click the Run button. In the Result pane at the bottom of the window you’ll see a line like date "Monday 16 August 2010 20:47:32 ".

The result includes the word ‘date’ and the quotation marks.

If all you wanted was the first part, Monday 16 August 2010, then you need to work with it a bit. You need to get the date string.

To get the time portion, 20:47:32 you’d get the time string.

Note: the symbol ¬ shows where you must press the Return key to make a new line.


set myDate to date string of (current date) ¬
myDate

set myTime to time string of (current date) ¬
myTime

set myWords to myTime & " " & myDate ¬
myWords


The result of that sequence of commands is the line "20:47:32 Monday 16 August 2010".


<< Tomorrow's Date >>

The date itself is stored as a bunch of numbers. Because they’re numbers, you can do maths operations like add and subtract. Once you get the string, the date becomes a bunch of words.

So, to get tomorrow’s date (I don’t need to care about the time), I need to find the Current Date and add 60 seconds * 60 minutes * 24 hours, or as I write it: (24*60*60).

set Tomorrow to (current date) + (24 * 60 * 60) ¬

set myTomorrow to (date string of Tomorrow)


The result of that sequence of commands when I run it on Tuesday, 17 August 2010 is the line "Wednesday 18 August 2010".


<< Short Date >>

In the file where I keep a record of each item I use, I want the date like this: 20100817. That format allows me to sort easily.

To get that format I need to get, and then work with, the Short Date in Applescript.

The Short Date string alone gets something like this: "18/08/2010", but that’s not quite what I want.

To turn the date around I have to pick out each bit, using code like ((items 4 through 5 of ShortDate) as string). Items 4 through 5 net me the 08 portion.

Here’s the code from my script:

set myTomorrow to ((current date) + (24 * 60 * 60)) ¬

set ShortDate to short date string of myTomorrow ¬

set stampText to "20" & ((items 9 through 10 of ShortDate) as string) & ((items 4 through 5 of ShortDate) as string) & ((items 1 through 2 of ShortDate) as string)

Out of all that I get stampText, which is the string I need in my preferred format: 20100817.
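The same stamp can be produced in the shell. The character-slicing mirrors the AppleScript's items 1-through-10 approach; the one-step version uses GNU date syntax (BSD/macOS date would use -v+1d instead of -d):

```shell
# Sample short date in day/month/year form, matching the article's example
ShortDate="18/08/2010"
# Rebuild it as YYYYMMDD by slicing characters, just as the AppleScript does
stamp="20$(echo "$ShortDate" | cut -c9-10)$(echo "$ShortDate" | cut -c4-5)$(echo "$ShortDate" | cut -c1-2)"
echo "$stamp"

# GNU date does the whole tomorrow-as-YYYYMMDD job in one step
stamp2=$(date -d '2010-08-17 +1 day' +%Y%m%d)
echo "$stamp2"
```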


Reference:
Today and Tomorrow in Applescript
http://knowit.co.nz/2010/08/today-and-tomorrow-in-applescript

Tuesday 11 March 2014

Apple: Mac OS X Server Software Update


The Software Update service is Apple's equivalent of Microsoft's Windows Server Update Services (WSUS). Your OS X server downloads updates directly from Apple's software update servers. Then, using Profile Manager, you point your Mac clients toward the local update server and they get their updates from you instead of from Apple, saving Internet bandwidth and increasing the speed of large downloads.

When set to Automatic, the service will automatically publish new updates to your Mac clients as they're made available from Apple. Selecting Manual gives you the option to hold back updates for testing before pushing them out to all of your clients. Anyone who has ever installed a new OS X point update on the day it's made available knows that you're taking a certain amount of risk by doing so, and holding all but the most critical security updates for at least a few days makes some sense if you're trying to reduce support calls.

The Software Update service can update all of the same things that Apple's servers can, including Mac firmware updates; updates for Safari, iTunes, and other Apple app updates not handled through the Mac App Store (you can use the Caching service to handle updates for those); and system updates for OS X versions reaching all the way back to 10.4. A full copy of Apple's update catalog is going to require several gigabytes of hard drive space. The ability to download and distribute iOS updates from your local server still isn't included.

There are also a few other limitations here compared to something like WSUS. While you can hold updates back from your users, there's no way to push them out. Once you've approved an update, your users can pull it down through the normal Software Update process, but you can't mandate that the update be installed and there's no way to check update compliance throughout your organization. If your users choose to defer the updates, there's really not much you can do about it. The best way to skirt this limitation is to use the Software Update service in concert with a management tool like Apple Remote Desktop, which can force update checks and installs, manually or on a schedule of your choosing.

Additionally, there's no way to approve updates for certain groups or individuals while holding them back from other groups and individuals, functionality that WSUS has because of its tight Active Directory integration. Like many of OS X Server's services, Software Update is useful in a home with many Macs or in a small business with Macs numbering in the low-to-mid double digits, but organizations with hundreds or thousands of Macs to manage may find that it doesn't scale particularly well.

Areas of overlap

If you're running the Software Update service and the Caching service on the same server at the same time, there are a couple of things to keep in mind. First, since both services will cache system updates, you might end up storing the same update multiple times; OS X point updates are regularly over a gigabyte in size, so this could add up over time. However, since the Caching service only downloads things you and your users actually need, you won't have to waste gigabytes of space on the ancient OS X updates that Software Update will download in Automatic mode.

Finally, Software Update gives you the ability to hold back certain updates for testing if you'd like, while Caching caches and serves everything without restriction. The same set-it-and-forget-it configuration that makes the Caching service so easy to start using also makes it difficult to live with if you need more granular or advanced controls.


Reference:
Software Update
http://arstechnica.com/apple/2013/12/a-power-users-guide-to-os-x-server-mavericks-edition/6/

Apple: Mac OS X Server Caching

These days, new services get introduced in OS X Server during point releases. OS X Server now has a software Caching service built to make updates faster. This doesn’t replace Apple’s Software Update server, mind you; it supplements it. And it’s very cool technology. “What makes it so cool?” you might ask, given that Software Update Server has been around for a while. Namely, the way that clients perform software update service location and distribution with absolutely no need (or ability) for centralized administration.

Let’s say that you have 200 users with Mac Minis and an update is released. That’s 200 of the same update those devices are going to download over your Internet connection, at up to 2 to 3 gigs per download. If you’re lucky enough to have eaten at the Varsity in Atlanta, just imagine trying to drink one of those dreamy orange goodnesses through a coffee stirrer. Probably gonna’ be a little frustrating. Suck and suck and suck and it’ll probably melt enough to make it through that straw before you can pull it through. For that matter, according to how fast your Internet pipe is, there’s a chance something smaller, like an update to Expensify will blow out that same network, leaving no room for important things, like updates to Angry Birds!

Now, let’s say you have an OS X Server running the new Caching service. In this case, the first device pulls the update down and each subsequent device uses the WAN address to determine where the nearest caching service is. If there’s one on the same subnet, provided the subnet isn’t a Class B or higher, then the client will attempt to establish a connection to the caching service. If it can and the update being requested is on that server then the client will pull the update from the server once the signature of the update is verified with Apple (after all, we wouldn’t want some funky cert getting in the way of our sucking). If the download is stopped it will resume after following the same process on a different server, or directly from Apple. The client-side configuration is automatic so provides a seamless experience to end users.

Pretty cool, eh? But you’re probably thinking this new awesomeness is hard as all heck to install. Well, notsomuch. There are a few options that can be configured, but the server is smart enough to do most of the work for you. Before you get started, you should:
  • Be running Mountain Lion with Server 2.2 or later.
  • Install an APNS certificate first, described in a previous article I wrote here.
  • Have an Ethernet connection on the server.
  • Have a hard drive with at least 50GB free on the server.
  • Be on a Class C or smaller LAN IP scheme (no WAN IPs can be used with this service, although I was able to multihome with the WAN off while configuring the service).
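If you like to script your sanity checks, the space and addressing requirements above are easy to verify programmatically. Here’s a quick Python sketch of my own (not anything Apple ships); the 50GB threshold comes from the list above, and the rest is just the standard library:

```python
import ipaddress
import shutil

def preflight(volume_path, server_ip, min_free_bytes=50 * 10**9):
    """Check two Caching service prerequisites: ~50GB free on the
    target volume and a private (LAN, non-WAN) IP on the server."""
    free = shutil.disk_usage(volume_path).free
    private = ipaddress.ip_address(server_ip).is_private
    return {
        "enough_space": free >= min_free_bytes,
        "private_ip": private,
    }

# A 192.168.x.x address is private, so only disk space can fail here:
print(preflight("/", "192.168.78.101"))
```

This won’t tell you about the Ethernet or APNS requirements, of course; those you still check by hand in Server.app.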

Once all of the requirements have been met, you will need to install the actual Caching service. To do so, open Server.app from the /Applications directory and connect to the server on which you would like to install the Caching service.

Click on Caching in the SERVICES section of the Server sidebar. Here, you have 3 options you can configure before starting the service. The first is which volume to place updates on. This should typically be a Pegasus or other form of mass storage that is not your boot volume. Use the Edit… button to configure which volume will be used. By default, when you select that volume, you’ll be storing the updates in /Library/Server/Caching/Data on that volume.


The next button is used to clear out the cache currently used on the server. Click Reset and the entire contents of the aforementioned Data directory will be cleared.

Next, configure the Cache Size. Here, you have a slider to configure about as much space as you’d like, up to “Unlimited”. You can also use the command line to set sizes the slider doesn’t offer, such as 2TB.

Once you’ve configured the correct amount of space, click on the ON button to fire up the service. Once started, grab a client from the local environment and download an update. Then do another. Time both. Check the Data folder, see that there’s stuff in there and enjoy yourself for such a job well done.

Now, let’s look at the command line management available for this service. Using the serveradmin command you can summon the settings for the caching service, as follows:

sudo serveradmin settings caching

The settings available include the following:

caching:ReservedVolumeSpace = 25000000000
caching:SingleMachineMode = no
caching:Port = 0
caching:SavedCacheSize = 0
caching:CacheLimit = 0
caching:DataPath = "/Volumes/Base_Image/Library/Server/Caching/Data"
caching:ServerGUID = "FB78960D-F708-43C4-A1F1-3E068368655D"
caching:ServerRoot = "/Library/Server"


Don’t change the caching:ServerRoot setting on the server; it’s derived from the global ServerRoot. Also, the ServerGUID setting is configured automatically when the server registers with Apple, so it should not be set manually either. When you configured that Volume setting earlier, you set the caching:DataPath option. You can point this somewhere else entirely, like:

sudo serveradmin settings caching:DataPath = "/Library/Server/NewCaching/NewData"

Now let’s say you wanted to set the maximum size of the cache to 800 gigs:

sudo serveradmin settings caching:CacheLimit = 800000000000

To customize the port used:

sudo serveradmin settings caching:Port = 6900

The server reserves a certain amount of filesystem space for the caching service; this is the only service I’ve seen do this. By default, it’s about 25 gigs of space. To customize that to, say, around 50 gigs:

sudo serveradmin settings caching:ReservedVolumeSpace = 50000000000

To stop the service once you’ve changed some settings:

sudo serveradmin stop caching

To start it back up:

sudo serveradmin start caching
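If, like me, you don’t enjoy counting zeros, a tiny helper can build these commands for you. This is my own sketch, and it assumes base-10 gigs (1 gig = 10^9 bytes), matching the 50000000000 figure used for ReservedVolumeSpace above:

```python
def cache_limit_command(gigabytes):
    """Build the serveradmin command that caps the cache at a given size.

    Assumes base-10 gigabytes (1 gig = 10**9 bytes), consistent with the
    ReservedVolumeSpace example of 50000000000 for ~50 gigs.
    """
    limit = int(gigabytes * 10**9)
    return f"sudo serveradmin settings caching:CacheLimit = {limit}"

print(cache_limit_command(800))
# sudo serveradmin settings caching:CacheLimit = 800000000000
```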

Once you’ve started the Caching service in OS X Server and familiarized yourself with the serveradmin caching options, let’s look at the status options. I always use fullstatus:

sudo serveradmin fullstatus caching

Returns the following:

caching:Active = yes
caching:state = "RUNNING"
caching:Port = 57466
caching:CacheUsed = 24083596
caching:TotalBytesRequested = 24083596
caching:CacheLimit = 0
caching:RegistrationStatus = 1
caching:CacheFree = 360581072384
caching:StartupStatus = "OK"
caching:CacheStatus = "OK"
caching:TotalBytesReturned = 24083596
caching:CacheDetails:.pkg = 24083596


The important things here:
  • An Active setting of “yes” means the server’s started.
  • The state is “RUNNING” or “STOPPED” (or “STARTING” if it’s in the middle).
  • The Port is the TCP port the service is listening on. If the caching:Port setting from earlier is left at 0, the server picks a port automatically (57466 in this example).
  • The CacheUsed is how much space of the total CacheLimit has been used.
  • The RegistrationStatus indicates whether the server is registered for the service with Apple.
  • The CacheFree setting indicates how much space on the drive can be used for updates.
  • The caching:TotalBytesRequested option indicates how much data clients have requested, while caching:TotalBytesReturned indicates how much data has been returned to clients.
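If you want to keep an eye on these numbers from a script, the fullstatus output parses just as easily as the settings. Here’s a quick sketch of my own that boils it down to the essentials:

```python
def summarize_fullstatus(output):
    """Turn 'serveradmin fullstatus caching' output into a short summary."""
    values = {}
    for line in output.splitlines():
        if " = " in line:
            key, _, value = line.partition(" = ")
            values[key.strip()] = value.strip().strip('"')

    used = int(values.get("caching:CacheUsed", 0))
    free = int(values.get("caching:CacheFree", 0))
    return {
        "running": values.get("caching:state") == "RUNNING",
        "port": int(values.get("caching:Port", 0)),
        "used_gb": round(used / 10**9, 2),   # base-10 gigs
        "free_gb": round(free / 10**9, 2),
    }

# Sample lines taken from the fullstatus output above
sample = '''caching:state = "RUNNING"
caching:Port = 57466
caching:CacheUsed = 24083596
caching:CacheFree = 360581072384'''
print(summarize_fullstatus(sample))
```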

Look into the /Library/Server/Caching/Config/Config.plist file to see even more information, such as the following:

<key>LastConfigURL</key>
<string>http://suconfig.apple.com/resource/registration/v1/config.plist</string>
<key>LastPort</key>
<integer>57466</integer>
<key>LastRegOrFlush</key>
<date>2012-12-16T04:33:13Z</date>


As you can see, this shows the registration URL the server last retrieved its configuration from, the last port the service listened on, and the last time the server registered with Apple or flushed its contents.

There are also a number of other keys that can be added to the Config.plist file, including CacheLimit, DataPath, Interface, ListenRanges, LogLevel, MaxConcurrentClients, Port and ReservedVolumeSpace. These are described further at http://support.apple.com/kb/HT5590.
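Config.plist is a standard property list, so Python’s plistlib can read it directly. A minimal sketch (the path is the one mentioned above; nothing here is Apple-supplied tooling):

```python
import plistlib

def read_caching_config(path="/Library/Server/Caching/Config/Config.plist"):
    """Read the Caching service's Config.plist into a dict
    (keys like LastConfigURL, LastPort, LastRegOrFlush)."""
    with open(path, "rb") as f:
        return plistlib.load(f)
```

You could, for instance, compare LastPort against the Port shown by fullstatus, or watch LastRegOrFlush to see when the cache was last flushed.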

In the Data directory that we mentioned earlier is a SQLite database called AssetInfo.db. A number of files are referenced in this database; they live in a file hierarchy within that same Data directory, and client systems access the data directly from that folder.
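Since it’s just SQLite, you can poke at AssetInfo.db with any SQLite client. The schema isn’t documented, so this little Python sketch of mine only peeks at the table names:

```python
import sqlite3

def list_tables(db_path):
    """List the table names in a SQLite database such as AssetInfo.db."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]

# e.g. list_tables("/Library/Server/Caching/Data/AssetInfo.db")
```

Treat the database as read-only; the service maintains it and won’t appreciate outside edits.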

Finally, the Server app contains a log that is accessed using the Logs option in the Server app sidebar. If you have problems with the service, information can be accessed here (use the Caching Service Log to access Caching logs).


The Caching Service uses the AssetCache service, located at

/Applications/Server.app/Contents/ServerRoot/usr/libexec/AssetCache/AssetCache

which runs as the new _assetcache user. Its LaunchDaemon is at

/Applications/Server.app/Contents/ServerRoot/System/Library/LaunchDaemons/com.apple.AssetCache.plist

When a supported client device (OS X 10.8.2+, iOS 7+) checks for updates, it first checks with Apple as it would without an available Caching server. This makes sense as Caching does not replicate a full catalog of updates from Apple. When an update is requested, the device requests it directly from the Caching server, not knowing if the Caching server has a local copy or not. If Caching doesn't have a local copy it starts to download the update from Apple, and almost immediately starts sending what it has downloaded thus far back to the device. Otherwise, the local copy is returned to the device.

I first guessed that devices would request updates from Apple and be redirected, either with standard HTTP redirection or a proprietary payload. But given that the device goes directly to the Caching server for content, I believe that when the device checks for updates with Apple, Apple matches the public IP and supported local address range in the Caching server registration against the device's addresses, and provides URLs for updates that point directly to the Caching server.

A real example of a URL used when requesting an update from the Caching server looks like this:
http://192.168.78.101:50856/us/r1000/010/Purple/v4/1f/9c/e1/1f9ce1e6-67e3-3bdd-0842-6ff3a0f515cd/mzps4900982890482523744.pkg?source=a1775.phobos.apple.com

Let's break this down:
http - Even though Server.app generates a self-signed certificate, Caching uses HTTP, not HTTPS, for serving content
192.168.78.101 - The local IP address of the Caching server
50856 - The TCP port that Caching server is using for HTTP
/us/r1000/010/Purple/v4/1f/9c/e1/1f9ce1e6-67e3-3bdd-0842-6ff3a0f515cd/mzps4900982890482523744.pkg - The URI reference to the content on the Caching server. I'm guessing it mirrors URIs on Apple's CDN servers.
source=a1775.phobos.apple.com - Lastly, Apple appends a query string to the URL to let Caching know which Apple server to pull the content from if it needs to.
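Since the pieces are all standard URL components, pulling them apart in a script is trivial. Here’s a quick Python sketch of my own using the example URL above:

```python
from urllib.parse import urlparse, parse_qs

def dissect_caching_url(url):
    """Break a Caching-server content URL into its interesting parts."""
    parts = urlparse(url)
    return {
        "server_ip": parts.hostname,                      # local Caching server
        "port": parts.port,                               # Caching's HTTP port
        "package": parts.path.rsplit("/", 1)[-1],         # the .pkg being served
        "origin": parse_qs(parts.query).get("source", [None])[0],  # Apple origin
    }

url = ("http://192.168.78.101:50856/us/r1000/010/Purple/v4/1f/9c/e1/"
       "1f9ce1e6-67e3-3bdd-0842-6ff3a0f515cd/mzps4900982890482523744.pkg"
       "?source=a1775.phobos.apple.com")
print(dissect_caching_url(url)["origin"])  # a1775.phobos.apple.com
```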


References:
1. The New Caching Service In OS X Server
http://krypted.com/mac-security/the-new-caching-service-in-os-x-server/

2. Caching Server 2
http://fraserhess.blogspot.com/2013/11/caching-server-2.html

3. A power user’s guide to OS X Server, Mavericks edition
http://arstechnica.com/apple/2013/12/a-power-users-guide-to-os-x-server-mavericks-edition/6/

Apple: CarPlay


Apple's much-anticipated CarPlay feature, formerly known as iOS in the Car, got its first public airing at the 2014 Geneva auto show. Volvo and Ferrari were offering demonstrations, but I took advantage of Mercedes-Benz's, which had CarPlay implemented in a C-class.

CarPlay is a means of mirroring an iPhone's apps and functions through a car's dashboard. Although the CarPlay screens look the same from car to car, the control paradigms can vary. For example, the Volvo concept car at the show with CarPlay used a touch screen, but the C-class had Mercedes-Benz's indirect COMAND controller. All of the CarPlay functions in this demonstration were accessed using Siri voice command and the COMAND dial set into the car's console.

The icons on the main screen showed the same design as those in iOS but were a little larger for easier viewing while on the road. There were also fewer of them. This initial implementation of CarPlay has navigation, audio, phone, messaging, and three third-party apps: Spotify, Stitcher, and iHeartRadio. CarPlay will also include Beats Music at launch.

Further app support is a big question for CarPlay. From what I gathered at the show, Apple will be the gatekeeper for which apps it will let into the CarPlay ecosystem, although I imagine the company will listen to any objections or requests from automakers. The apps supported are likely to be a tiny subset of those available in the iTunes app store, as Apple and automakers are going to be very sensitive to driver distraction issues.

For this demonstration, an iPhone was cabled to the car's USB port -- CarPlay does not work through a wireless connection. An Apple staffer took me through the various functions, which all showed very similar flow as in iOS, making things instantly familiar for iOS users.

Entering an address for navigation was as simple as asking Siri for a business or address. Adding a little novelty, the interface also included a list of recent addresses combined with any addresses that had been received on the iPhone's e-mail or text messages. The interface also let me enter addresses manually, but that proved tedious using the COMAND controller, because I had to choose each letter with the dial on the console.

The Apple Maps showing up on the screen were larger than those on the phone, making it easier to follow route guidance. The system uses the same routing algorithms as Apple Maps, and includes traffic information. The Apple staffer giving me the demo mentioned that the navigation employs aggressive caching so as to keep routing when the connection drops out. However, you can't initiate navigation if the car is outside of cell range.

If the car has its own GPS antenna, as the C-class did, CarPlay makes use of it to get better positioning. There is also a bit of dead reckoning programmed into Apple Maps, so a temporary loss of the GPS signal won't show the car driving through a forest or the ocean.

Making phone calls was as simple as saying a contact's name, although the interface also included an onscreen keypad. Text messaging was more interesting: the interface will not show any message text at all. For incoming or outgoing texts, Siri reads them out loud, preventing driver distraction. There is no way to compose a text other than through voice command.

The music library looks very similar to the interface in iOS, and CarPlay defaults to iTunes Radio. Instead of showing a discrete image for album art on the Now Playing screen, the interface makes the cover image a subtle background.

We also looked at the iHeartRadio interface which showed the features I expect from the app. I could see my "favorited" stations and a list based on the car's current location. However, there wasn't a means of searching for new stations and adding them to my Favorites list.

Most impressively, CarPlay worked seamlessly during the demonstration. There were lag times waiting for external data to load, but the main functions and interface were all extremely quick. In the C-class, there was also an icon on the screen that let me switch back to the car's native navigation, audio, and phone functions.

To use CarPlay, you'll need an iPhone 5 or later, and you will have to wait until at least the end of the year. A Mercedes-Benz engineer told me that the company is trying to get it out in its C-class by the end of 2014, so that will likely be in a 2015 model. I also heard that the S-class uses a similar head unit to the C-class, so that model could also benefit from CarPlay. Volvo was demoing CarPlay in a concept car, but has said it will first roll out the feature in its XC90 SUV. Other companies, such as Honda, have announced CarPlay adoption but had nothing on display at Geneva.

A Mercedes-Benz spokesperson also clarified to me that the company would likely support any Android mirroring implementation that comes out of the Open Automotive Alliance, pointing out that both iOS and Android can coexist in its cars.


Reference:
Apple CarPlay lets iOS take over a Mercedes-Benz
http://reviews.cnet.com/car-electronics/apple-carplay/4505-3424_7-35836487.html

Apple: iOS 7.1


Apple on Monday released an update to its iOS 7 mobile operating system -- iOS 7.1 -- that adds new features such as CarPlay and fixes bugs.

With iOS 7.1, Apple also tweaked its Siri voice assistant, iTunes Radio, and its Touch ID fingerprint sensor. The company streamlined the operating system to make it work better with the iPhone 4, made some user interface refinements, and included some stability and accessibility improvements.

The update is available immediately, and Apple devices will alert users to it over the next week.

iOS 7.1 marks the first major update following Apple's release of iOS 7 about six months ago. Apple initially unveiled iOS 7 at its developer conference in June of last year and released the operating system in September. The software underwent a complete design overhaul, with everything from the typography and color schemes getting an update. iOS 7 also added useful features like automatic updates to make everyday use easier, AirDrop, and iTunes Radio, as well as a new control center that gives quick access to most-used features. Since the introduction of iOS 7, Apple had released five beta updates of iOS 7.1 to developers.

About 83 percent of Apple device users have downloaded iOS 7, Apple said. The operating system won't run on the original iPad from 2010, or on any iPhone older than the iPhone 4.

Apple streamlined functions in iOS 7.1 to make the experience faster for iPhone 4 users, whose devices have a much less advanced chip than Apple's newest phones and tablets.

With iOS 7, Siri recognized that a user was done talking because the person paused for a couple of seconds. Now, users can hold down the home button the entire time they're talking to Siri. Once they lift their finger, Siri knows they're done talking. Both ways of interacting are available in iOS 7.1. Apple also included new male and female voices for Mandarin Chinese, British English, Australian English, and Japanese.

iTunes Radio also got some tweaks. There's a new search field that allows users to create stations based on their favorite songs or artists. Users can buy albums with a single tap from iTunes Radio, rather than only buying singles. And for the first time, users can now subscribe to iTunes Match from their mobile devices rather than from the desktop.

In the calendar month view, users can now toggle to see daily appointments. And the Touch ID fingerprint reader became more accurate with iOS 7.1. There should be fewer false rejections as well as quicker response times in reading fingerprints.

iOS 7.1 also includes a camera update that's specific to iPhone 5S users. That's because the newer phone uses Apple's advanced A7 processor while older devices have less-powerful chips. HDR, or "high dynamic range," will automatically turn on when it's needed. That takes many photos at once in different exposures to create a sharp image that looks closer to what the human eye sees, as the varying highlights and shadows are all accounted for.

Meanwhile, iOS 7.1 users will be able to take advantage of CarPlay, which Apple unveiled last week at the Geneva Motor Show. The feature is a means for an iPhone (5 and newer) to power a touch screen on a new car's dashboard. The interface is iOS-like, but vastly simplified compared with what's seen on a phone or tablet. Functionality is limited too -- really just letting users access maps and audio, though Siri can read messages and take dictation for responses.


Reference:
Apple's iOS 7.1 lands with CarPlay, improved fingerprint scanner
http://news.cnet.com/8301-13579_3-57620109-37/apples-ios-7.1-lands-with-carplay-improved-fingerprint-scanner/