Tuesday, July 11, 2023

Using SSM Automation as a makeshift web form for invoking Lambda Functions (and Step Functions, too!)

What's this article about?

It's about AWS Systems Manager Automation. This product can be seen as a poor man's Step Functions, with a basic and easy-to-use web frontend that makes it trivial to execute operational tasks that would otherwise be tedious.

The need for simple web forms

Many of our IT-related processes need to interact with various internal web services. The logic to speak with these services is implemented using Lambda functions.

Each of these Lambda functions expects a curated JSON payload with multiple keys. Building the payloads and invoking the functions from the web console or the CLI is not only time-consuming, but also complex.

Since these tasks are repetitive and not completely automatable, we would like to be able to offload them to Level 1 support personnel, while limiting training and reducing data entry errors.

Doing so requires providing IT support staff with a web-based interface that is more user-friendly than AWS's built-in Lambda console. A simple web form that can validate input upfront, provide user feedback, and format the JSON payload automatically would fit the bill, right?

The question then arises: how can one build such a web form quickly and, if possible, define it with IaC?

SSM Automation to the rescue!

The most obvious answer has been to use SSM Automation for this purpose.

Simply put, an SSM Automation can be seen as a much simpler Step Function geared towards low volume and interactive use. 

From what I can tell, this Systems Manager feature was originally designed to automate tasks related to EC2 instance management, but I found that it can also invoke Lambda functions and Step Functions, and even call arbitrary AWS API actions if needed.

Key points

An SSM Automation is defined in a standard SSM document. The schema is currently at revision 0.3.

When the automation is executed, a basic web form is automatically generated based on parameters you've set in the document. Note that this form is not extremely customisable. 

The web form's input parameters support validation rules similar to CloudFormation parameters (e.g. AllowedValues, AllowedPattern). These parameters can then be assembled into a structured payload when invoking your function.
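
To make this concrete, here is a minimal sketch of an Automation document that exposes two validated parameters and passes them to a Lambda function. The function name, parameter names and validation patterns are made up for the example:

schemaVersion: '0.3'
description: "Invoke the user-provisioning Lambda with a validated payload"
parameters:
  UserName:
    type: String
    description: "Login name of the user to provision"
    allowedPattern: "^[a-z][a-z0-9._-]{2,31}$"
  Environment:
    type: String
    allowedValues:
      - dev
      - prod
    default: dev
mainSteps:
  - name: invokeProvisioningFunction
    action: aws:invokeLambdaFunction
    inputs:
      # Hypothetical function name -- replace with your own
      FunctionName: my-provisioning-function
      Payload: '{"userName": "{{ UserName }}", "environment": "{{ Environment }}"}'

Each parameter becomes a field in the generated web form, and the payload string is built from the validated values.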

Logs from your Lambda functions are much easier to consume in the web interface than by going directly to CloudWatch. Each step's output is captured as if the function had been invoked with the "tail" log option at the command line, and saved in the execution log.


The automation log lets you check the output of your Lambda functions quickly; no need to dig through countless log streams in CloudWatch!

Something cannot be done directly in an automation step? You can use inline scripts written in Python or PowerShell. There is no need to host these scripts in Lambda; execution is taken care of automatically.
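
A rough sketch of such an inline script step, assuming the same UserName parameter as above (the step name and payload keys are invented for the example):

  - name: formatPayload
    action: aws:executeScript
    inputs:
      Runtime: python3.8
      Handler: script_handler
      InputPayload:
        userName: "{{ UserName }}"
      Script: |-
        def script_handler(events, context):
            # Build and return the payload expected by the next step
            return {"payload": {"user": events["userName"].lower()}}
    outputs:
      - Name: Payload
        Selector: $.Payload.payload
        Type: StringMap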

Oh, and everything can be defined using CloudFormation or the CDK.
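
For instance, a CloudFormation resource wrapping such a document could look roughly like this (a sketch; the resource and document names are placeholders):

Resources:
  ProvisioningAutomationDocument:
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Automation
      Name: invoke-provisioning-lambda
      Content:
        schemaVersion: '0.3'
        parameters:
          UserName:
            type: String
        mainSteps:
          - name: invokeProvisioningFunction
            action: aws:invokeLambdaFunction
            inputs:
              FunctionName: my-provisioning-function
              Payload: '{"userName": "{{ UserName }}"}'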

AWS has already published many examples that you can look at for yourself, such as this one.

Some caveats

The automation language is not as complete or as feature-rich as the one used with Step Functions and, I can assume, is not intended to be.

There is throttling in place. Don't use this for high-volume transactions.

Wrap-up

Using the SSM Automation web interface, there is no longer a need to prepare and curate JSON payloads manually, and invoking Lambda or Step Functions becomes very easy for support staff.

I hope this quick article helped you get an idea of how SSM Automation can be of help with this use case.

Thursday, July 29, 2021

Using cfn_nag with the CDK

Nota bene: As I was writing this post, I noticed that CDK Labs has started work on cdk-nag which will let you analyze your work directly from within your code.

We've been using Stelligent's excellent cfn_nag for over a year to ensure that our homemade CloudFormation templates are following minimal security best practices. As we've begun transitioning to the CDK, we now need to do similar analysis on synthesized templates generated by the CDK.

First of all, so far cfn_nag seems perfectly capable of analyzing templates synthesized by the CDK. The templates themselves are harder to read because they are generated, but the tool works just the same.


Running cfn_nag on synthesized templates


Running cfn_nag in a CDK workflow is fairly simple: synthesize the template with "cdk synth", then run cfn_nag_scan on the synthesized template as you would on a hand-written one, e.g. "cfn_nag_scan -i cdk.out/mytemplate.json". There is nothing more to it.
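
If your app synthesizes several stacks, you can also point cfn_nag at the whole output directory. Something like this should work, though the template pattern may need adjusting depending on your cfn_nag version:

$ cdk synth
$ cfn_nag_scan --input-path cdk.out --template-pattern '.*\.template\.json'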


Adding exceptions in your CDK code

A small challenge for me was to find a way to embed the cfn_nag exceptions in the synthesized output. Thanks to Yan Xiao's stackoverflow post, I've been able to do this quite easily.

Please note that I'm not a TypeScript expert (or JavaScript, for that matter), so I'll show you how I've done it, but keep in mind that it might not be the best way.


Using an array to hold the exceptions

First, define empty arrays that will contain the exceptions for each of the resources in your code, for example:

    // Array containing cfn_nag exceptions for the Lambda Function
    var cfnnag_LambdaFunctionRulesToSupress = [];
    // Array containing cfn_nag exceptions for the Bucket
    var cfnnag_BucketRulesToSupress = [];

As your code flows, add the exceptions that you need in the appropriate array, for example, if you need to add an exception to your bucket, do this:

    cfnnag_BucketRulesToSupress.push({
      "id": "W35",   
      "reason": "No need for logging on this Bucket"
    });

Note here that I'm appending new exceptions to the array using push. So just push as many of these exceptions as you need.


Applying the exceptions as CloudFormation Metadata

Once you've finished defining the resources, you'll need to add the exceptions as CloudFormation metadata through the Level 1 (Cfn) constructs. Here are two examples.

1. This adds the exceptions as metadata to your bucket, assuming that you've pushed some exceptions in the array beforehand:

    const S3CfnBucket = this.S3Bucket.node.defaultChild as s3.CfnBucket;
    if (cfnnag_BucketRulesToSupress.length > 0) {
      S3CfnBucket.cfnOptions.metadata = {
        "cfn_nag": {
          "rules_to_suppress": cfnnag_BucketRulesToSupress
        }
      }
    }

2. When adding policies directly to a bucket using the addToResourcePolicy method, things get trickier, as the CDK will not embed these policies in your bucket but will create a separate AWS::S3::BucketPolicy resource. So if you have specific cfn_nag exceptions to apply to your policies, do this to add them as metadata to that resource:

    const S3CfnbucketPolicy = this.S3Bucket.policy?.node.defaultChild as s3.CfnBucketPolicy;
    if (cfnnag_BucketPolicyRulesToSupress.length > 0 ) {
      S3CfnbucketPolicy.cfnOptions.metadata = { 
        "cfn_nag": {
          "rules_to_suppress" : cfnnag_BucketPolicyRulesToSupress
        }
      }
    }

So there you go. Hope this helps.

Wednesday, November 13, 2019

Cross-account sharing of a PrivateLink endpoint using Private Hosted Zones and CloudFormation

Introduction

It is possible to concentrate all your PrivateLink endpoints in one account, then share them with other accounts and access them through a Transit Gateway.

This reduces the consumption of private IPs and makes everything cleaner. You also do not have to pay an hourly fee for these endpoints in each of your accounts, although you still have to consider the transit fees involved in bringing the data into another VPC.

This is done using Route53 and Private Hosted Zones (PHZs). James Levine's post Integrating AWS Transit Gateway with AWS PrivateLink and Amazon Route 53 Resolver explains very clearly how you can achieve this. I'll spare you the details, but basically this lets you override the endpoints' DNS names within your accounts so they point to your private addresses instead of the public ones.

Go read James' article first, then come back here for implementation details.

A sample use case

The use case that made me do this initially is AWS Systems Manager. I wanted to be able to use its Session Manager feature to open interactive sessions on EC2 instances in multiple accounts. Since I wanted to avoid routing SSM data through the internet, and since this requires several VPC endpoints, I decided to concentrate them in one account.

As documented, four VPC interface endpoints (i.e. PrivateLink endpoints) are needed for this: ssm, ssmmessages, ec2 and ec2messages. There is also a fifth endpoint for S3, but that one is a gateway endpoint and it needs to be defined in each of your accounts.

When an EC2 instance tries to communicate with the ssm endpoint, its agent looks up that endpoint's  DNS address and by default, it gets the public IP address for your region. For example:

$ nslookup ssm.ca-central-1.amazonaws.com

Non-authoritative answer:
Name:   ssm.ca-central-1.amazonaws.com
Address: 52.94.100.144

But what do you do if you have defined a PrivateLink endpoint for ssm.ca-central-1.amazonaws.com in another account, and you wish to use it through a peering connection or a Transit Gateway? James explains how to configure a DNS hosted zone to fool everything in that VPC into using a private address. A lookup in this EC2 instance will then give this result:

$ nslookup ssm.ca-central-1.amazonaws.com

Non-authoritative answer:
Name:   ssm.ca-central-1.amazonaws.com
Address: 192.168.0.10

where 192.168.0.10 is the private IP address assigned to the VPC endpoint.

This works because that DNS record is, in fact, an alias record on the regional address of your endpoint. For example, ssm.ca-central-1.amazonaws.com is aliased to vpce-0123456789abcdef-01234abcd.ssm.ca-central-1.vpce.amazonaws.com.

If you went further and deployed your endpoint across multiple AZs, you should get multiple addresses:

$ nslookup ssm.ca-central-1.amazonaws.com

Non-authoritative answer:
Name:   ssm.ca-central-1.amazonaws.com
Address: 192.168.0.10, 192.168.1.10

N.B. I'm still not sure what the impact of a failure in one AZ is in this scenario as I'm well-aware that using DNS as a failover mechanism isn't great and can involve timeouts.  I hope that any unavailable IP address will be removed dynamically from the regional address... I don't have the answer to this one.


Configuring a PrivateLink endpoint with a PHZ in CloudFormation

Here are the three resources you need to put in your CF template to deploy an endpoint and a private hosted zone. Once you have these in place, it is a matter of repeating the same code for ssmmessages, ec2 and ec2messages.

Security Group
The first thing you need is to define a security group to make your endpoint accessible on the port it needs (usually TCP/443):

  ssmendpointSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: "SG for SSM endpoint"
      GroupName: "SSMSG"
      SecurityGroupIngress:
        - CidrIp: 10.10.0.0/16
          IpProtocol: tcp
          FromPort: 443
          ToPort: 443
      Tags:
        - Key: Name
          Value: "My SSM Endpoint SG"
      VpcId:
        Fn::ImportValue: "VPC-id-outputvariable-from-another-template"

Note here that I've imported the VPC ID using an output variable that comes from another CF template. You can hardcode it or input it as a parameter if you prefer.

PrivateLink Endpoint
Then, you define the PrivateLink endpoint itself:

  ssmendpointVPC:
    Type: AWS::EC2::VPCEndpoint
    Properties:
      VpcEndpointType: Interface
      ServiceName: !Sub com.amazonaws.${AWS::Region}.ssm
      SecurityGroupIds:
        - !Ref ssmendpointSG
      SubnetIds:
        - Fn::ImportValue:
            !Sub "AZ1-subnet-id-outputvariable-from-another-template"
        - Fn::ImportValue:
            !Sub "AZ2-subnet-id-outputvariable-from-another-template"
      VpcId:
        Fn::ImportValue:
          !Sub "VPC-id-outputvariable-from-another-template"

Private hosted zone
As for the private zone, these two entries need to be configured (you'll need to replace the region with yours):

  phzssmcacentral1amazonawscom:
    Type: AWS::Route53::HostedZone
    Properties:
      Name: ssm.ca-central-1.amazonaws.com
      VPCs:
      - VPCId:
          Fn::ImportValue: "VPC-id-outputvariable-from-another-template"
        VPCRegion: "ca-central-1"

  phzaliasrecordssmcacentral1amazonawscom:
    Type: AWS::Route53::RecordSet
    Properties:
      AliasTarget:
        HostedZoneId: !Select [ '0', !Split [ ':', !Select [ '0', !GetAtt ssmendpointVPC.DnsEntries ]]]
        DNSName: !Select [ '1', !Split [ ':', !Select [ '0', !GetAtt ssmendpointVPC.DnsEntries ]]]
      HostedZoneId: !Ref phzssmcacentral1amazonawscom
      Name: ssm.ca-central-1.amazonaws.com.
      Type: A

Note the multiple selectors under AliasTarget. The combination of these selectors extracts specific fields from AWS::EC2::VPCEndpoint that are made available as attributes once CloudFormation deploys the endpoint, namely:

  • The hosted zone ID for the endpoint
  • The regional DNS address of the endpoint (as opposed to the one pointing to the AZ itself)
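
Each element of the DnsEntries attribute is a string of the form "hosted-zone-id:dns-name", with the regional entry first and the per-AZ entries after it, roughly like this (the IDs below are made up):

ZEXAMPLE1234567:vpce-0123456789abcdef0-abcd1234.ssm.ca-central-1.vpce.amazonaws.com
ZEXAMPLE1234567:vpce-0123456789abcdef0-abcd1234-ca-central-1a.ssm.ca-central-1.vpce.amazonaws.com

Splitting the first entry on ":" therefore yields the hosted zone ID and the regional DNS name used by the alias record.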

Sharing a PHZ across accounts

Once the PHZ is deployed, you need to share it with your other accounts. Unfortunately, you cannot do this with CloudFormation. The procedure is explained in the KB article How do I associate a Route 53 private hosted zone with a VPC on a different AWS account?, which shows how to do it using the CLI.
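
In short, the association boils down to two CLI calls, sketched below with placeholder IDs. The first is run from the account that owns the PHZ, the second from the account that owns the remote VPC:

$ aws route53 create-vpc-association-authorization \
    --hosted-zone-id Z0EXAMPLE \
    --vpc VPCRegion=ca-central-1,VPCId=vpc-0123456789abcdef0

$ aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id Z0EXAMPLE \
    --vpc VPCRegion=ca-central-1,VPCId=vpc-0123456789abcdef0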

Good luck.


Thursday, October 24, 2019

Deploying a cross-account Transit Gateway using CloudFormation



Introduction

I've decided to automate the deployment of a Transit Gateway using CloudFormation.

I'll show you here how I did it, but be advised that it is currently not possible to do complex configurations on a TGW using CloudFormation. You will need to do some tasks manually, at least one of which can only be done with the AWS CLI.

First, some caveats

Now there are a few caveats you need to be aware of before using CloudFormation to deploy a Transit Gateway:

  • At this time, no attributes whatsoever can be extracted with GetAtt, which means you can't retrieve the TGW's ARN, default route table ID, and so on for use later in your template.
  • Every (and I mean every) property change requires a replacement, which is a big deal.
    • This means that doing something as simple as trying to change a tag on your Transit Gateway using CloudFormation will cause downtime, as it requires that you first remove any attachments and dependencies before being able to update the stack.
    • It also means that the ID and ARN of the TGW itself will change once it is replaced, which requires lots of planning: any dependents that refer to these identifiers will need to be reconfigured.

tgw-main.yml : Deploying the Transit Gateway

Deploying a TGW is fairly straightforward:

Resources:
  mytgw:
    Type: 'AWS::EC2::TransitGateway'
    Properties:
      AutoAcceptSharedAttachments: enable
      Tags:
      - Key: Name
        Value: "My Transit Gateway"

Outputs:
  outmytgw:
    Description: TGW ID
    Value: !Ref mytgw
    Export:
      Name: "mytgw-id"


I've set AutoAcceptSharedAttachments to enable to avoid having to accept VPC attachments manually when they are created later.

I've also added an output variable. It is set so that I can then reference the TGW ID from other stacks, namely the VPC-related stacks that will attach themselves to the TGW. I suggest you export it with the name !Sub "${AWS::StackName}-mytgw-id" if you prefer prefixing it with the stack name.

Caution: Whatever you do, be sure to understand all the properties of AWS::EC2::TransitGateway and their implications. As I said earlier, you cannot change any of them once it's deployed without replacing the TGW, and removing all the dependencies below (and possibly more).

tgw-ram.yml: Sharing the TGW across different accounts (optional)

If you need to attach to the TGW from a VPC in another account, you first need to use Resource Access Manager (RAM) to share it between your accounts.

This cannot be done in the previous stack (tgw-main.yml); sharing the TGW requires its ARN and, as explained in the Caveats section, there is no way to get that from CloudFormation. To my knowledge, it's not available in the console either. Therefore, you first need to extract the ARN using the AWS CLI:

$ aws ec2 describe-transit-gateways

This will show you the ARN, such as:
arn:aws:ec2:xx-xxxxx-x:yyyyyyyyyyy:transit-gateway/tgw-zzzzzzzzzzz

Where:

  • xx-xxxx-x: The AWS region where the TGW is located
  • yyyyyyyy: The account number that hosts the TGW
  • tgw-zzzzz: The TGW ID.
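
You can also have the CLI return the ARN directly with a --query expression; assuming a single TGW in the account, something like this should do it:

$ aws ec2 describe-transit-gateways \
    --query 'TransitGateways[0].TransitGatewayArn' --output text
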
Then, you can build your RAM template like this:

Resources:
  sharemytgw:
    Type: "AWS::RAM::ResourceShare"
    Properties:
      Name: "My TGW RAM Share"
      ResourceArns:
        - "arn:aws:ec2:my-aws-region:my-aws-account:transit-gateway/tgw-my-tgw-id"
      Principals:
        - "first_account_number"
        - "second_account_number"
      Tags:
        - Key: "Name"
          Value: "My TGW RAM Share"

N.B. I actually use a parameter for ResourceArns, so I don't have to hardcode the ARN in there. I've left it out to keep things simple.

Once this template is deployed, you need to go into each account manually and accept the Resource Access Manager invitation. There is no way, to my knowledge, of doing this within CloudFormation.
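
It can, however, be scripted with the CLI from each invited account; a rough sketch (the invitation ARN comes from the output of the first command):

$ aws ram get-resource-share-invitations
$ aws ram accept-resource-share-invitation \
    --resource-share-invitation-arn <invitation-arn-from-previous-command>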

tgw-vpc-attach.yml: Attaching a VPC to the TGW

Assuming you already have a CloudFormation Template to deploy your VPCs, it is then a matter of adding this code to have them attach to the TGW:

Resources:
  vpctgwattach:
    Type: 'AWS::EC2::TransitGatewayAttachment'
    Properties:
      TransitGatewayId:
        Fn::ImportValue:
          !Sub "mytgw-id"
      VpcId: !Ref myvpc
      SubnetIds:
        - !Ref mysubnetAZ1
        - !Ref mysubnetAZ2
      Tags:
      - Key: Name
        Value: "VPC TGW attachment"

Outputs:
  outvpctgwattach:
    Description: VPC TGW Attachment ID
    Value: !Ref vpctgwattach
    Export:
      Name: "vpctgwattach-id"

See here that I refer to the output variable defined previously in tgw-main.yml in order to get the ID of the TGW (without the stack name, but this is up to you).

This is for a VPC located in the same account as the TGW; note that referencing CloudFormation output variables doesn't work across accounts, so in that case the TGW ID has to be hardcoded. There are workarounds, but from what I've seen, they involve Lambda functions and I prefer avoiding this for the moment.

The TGW needs to be attached to a subnet in each of the AZs that your VPC spans. It doesn't matter which subnet you pick in each AZ, but you need one. The attachment creates a "hidden" endpoint that consumes an IP address in each subnet, and all packets going to the TGW are routed through it.

While it could be possible to attach to that VPC directly from tgw-main.yml, I've decided not to do this, as I prefer not having to modify the main TGW template when adding new VPCs. It must also be done from within the account that owns the VPC, so I prefer keeping the attachment business out of the main template.

tgw-defaultroutetable.yml: Adding entries to the default route table

There is no way to extract the ID of the default route table from CloudFormation, so you first need to look it up using the CLI or the console. The value looks like tgw-rtb-xxxxxxxxxxxx.
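
With the CLI, something along these lines should return the default association route table of a given TGW (the TGW ID is a placeholder):

$ aws ec2 describe-transit-gateway-route-tables \
    --filters "Name=transit-gateway-id,Values=tgw-zzzzzzzzzzz" \
              "Name=default-association-route-table,Values=true" \
    --query 'TransitGatewayRouteTables[0].TransitGatewayRouteTableId' --output text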

Then, adding a new route is a matter of declaring an AWS::EC2::TransitGatewayRoute resource that refers to this route table ID. I suggest you use a parameter for the route table ID rather than hardcoding it, if you prefer.

Resources:
  tgwdefaultroutetableentry1:
    Type: AWS::EC2::TransitGatewayRoute
    Properties:
      DestinationCidrBlock: 0.0.0.0/0
      TransitGatewayAttachmentId:
        Fn::ImportValue:
          !Sub "vpctgwattach-id"
      TransitGatewayRouteTableId: "tgw-rtb-xxxxxxxxxxxxx"

Notice here that I've used the export variable vpctgwattach-id from tgw-vpc-attach.yml: the route's target must be a TGW attachment ID (tgw-attach-...), not the TGW itself.

Wrapping it all up

Deploying a TGW using CloudFormation and sharing it across accounts is a multi-step process:

  • Deploy the TGW using tgw-main.yml
  • Get the ARN manually using the AWS CLI (or some other way), then share it with other accounts using tgw-ram.yml
  • Go into each account and accept the share invitation.
  • Create a template to attach the VPCs, named tgw-vpc-attach.yml, or better, add that template's code to your existing VPC template(s).
  • Get the default route table ID using the console (or some other way) and add route entries to the TGW using yet another template, tgw-defaultroutetable.yml

That's about it.

Wednesday, July 31, 2019

Update and thoughts on Ansible for cloud automation

Except for a few posts here and there, there hasn't been much really useful content in this blog in almost eight years! I think an update is in order.

I started this blog initially to target mostly HP-UX as I was feeling comfortable enough to post on various subjects on this operating system, and few, if anybody, blogged on HP-UX outside of the official channels, making this niche blog relevant.

Then I moved on in 2010. Since then, HP-UX as a platform has been winding down itself, with fewer and fewer systems running. And in the years that followed, I'll be the first to admit that it has not been easy to find a subject I felt comfortable enough to blog about.

This is partly because I could not get a foothold on any particular technology. I briefly worked as a systems architect, then came back to the technical side in 2014 by keeping Tru64 systems up and running until they got decommissioned (this was in an environment with extremely strict compliance rules -- to be honest, it wasn't very exciting). I then assisted in deploying some Windows servers (!!) in 2015-2016, along with some Red Hat Linux systems, and finally, in 2017, I got drafted to help upgrade some Solaris 11.3 servers on a few SuperClusters. Okay, drafted is a strong word; it's a terrific and exciting platform, but sorry Solaris, it seems to me that you're slowly moving on like HP-UX, too.

For a year now, I've been working on automating deployments in Azure in a new team. This is a 180-degree turn for a systems administrator, and I like it.

We're using Ansible to do this, using it to call (somewhat in preferred order):

  • native Ansible modules (when they exist, and when they don't crash)
  • AZCLI
  • REST API calls using azure_rm_resource whenever possible
  • ARM templates 
  • Powershell (last resort on a Linux host)

Is Ansible great at this job? It's been one year now, and I'm still not sure.

For starters, it takes a long time to make the code fully bullet-proof and idempotent. Furthermore, while Ansible (especially its modules) makes it easy to declare a desired state for specific Azure resources, it is harder to build a playbook that not only deploys resources, but also reports differences over time (i.e. drift management) and deletes those resources when they are no longer needed.

Terraform has been suggested many times to resolve this, but I haven't looked into it yet. Well, actually I did, but after an hour I still couldn't figure out how to print "hello world", so I kind of called it quits; there is so much work to be done that side projects are limited right now.

AWS seems to have got it right with CloudFormation and stacks, a feature which, I think, is missing from ARM templates for now, as ARM templates seem to be designed as a one-time thing. I've just learned about stacks today and I'm getting excited.

To be continued!



Monday, March 5, 2018

Installing Solaris 11.4 beta on a Proliant G4


I've been trying to install Solaris 11.4 beta on an extremely old x86 server, in part because I do not have access to a scratch VMware environment and also to see if I could pull it off.

I had access to a bunch of unused HP Proliant DL360 G4s. They are reported to work on the Hardware Compatibility List, so I said to myself "Why not". So I scavenged memory and CPUs and tried to install the OS.

I was able to boot the install media using a USB key, but the graphic card didn't seem to be compatible, as I got the message "Compatible fb not found". Specifying -B console=force-text didn't work, it switched to graphical mode anyway.

It took multiple tries and reboots to find a combination that worked. I found out that it is possible to install on a serial console. There are GRUB menu entries that let you boot the OS using ttya or ttyb, but they are hidden. I'm not sure how I got into this menu, but I think it was by pressing ESC at the GRUB prompt that gives you 5 seconds before booting the OS.

I attached a laptop to the server with a serial cable and ran screen in an xterm. I was able to access the text installer successfully and install the OS.

My system now boots. I'm waiting for my network patch request to come through before continuing.

I'm especially interested in trying the new Solaris Analytics interface. I'll keep you posted.

Thursday, May 11, 2017

Revisiting the restricted shell

I've been administering Unix boxes since the mid-90s and I've always been told that using restricted shells (rsh, rksh, rbash) was a bad idea because they are easily hackable. Indeed, there are countless known methods to get out of a restricted shell: from finding an application that allows a shell escape, to trying to compile your own, to doing clever hacks with the history file.

I've recently been in a corner case where I was dealing with an embedded product which requires a specific set of commands and also uses some bracket commands that are difficult to wrap with our usual SSH command authenticator. So I decided to revisit using a restricted shell to jail this user and I think I managed to make the jail shatterproof enough.

Here is how I did it:

Create Bob's home directory, but assign it to root:
# mkdir /home/bob
# chown root:root /home/bob
# chmod 755 /home/bob

Force a .bashrc and .profile that change Bob's PATH to a limited set of commands:
# echo "export PATH=/home/bob/allowed_commands" > /home/bob/.bashrc
# ln -s /home/bob/.bashrc /home/bob/.profile

The reason for having both a .profile and a .bashrc is to ensure that this profile will be loaded both for interactive and non-interactive sessions.

If the user needs to write stuff somewhere, create a directory for Bob, e.g.
# mkdir /home/bob/writable
# chown bob /home/bob/writable
# chmod 755 /home/bob/writable

Create the allowed_commands directory and put symlinks in it pointing to allowed binaries:
# mkdir /home/bob/allowed_commands
# ln -s /bin/mycmd /home/bob/allowed_commands/mycmd

Now you must be sure of the following:

1. Bob must NOT have any writable access to /home/bob/.profile or /home/bob/.bashrc, else he can change the PATH value
2. Bob must NOT have any writable access to /home/bob, to prevent any modification of .profile and .bashrc
3. Investigate ANY command that ends up in the allowed_commands jail to be sure that there is NO known way of executing another command from it, displaying arbitrary files, or escaping the shell. If there is, then forgo allowing that command or write a wrapper around it (see below).
4. See the jail escape methods linked above, log in as Bob and see if you can use them to escape the jail.

Example of a wrapper script with scp

Let's say I want to allow Bob to scp files into his account using scp's undocumented -t (i.e. -to) option. I would normally do this:
# ln -s /bin/scp /home/bob/allowed_commands/scp

This is wrong, as scp can be coerced with -S to execute arbitrary commands.

A solution is to put the following in the allowed_commands jail instead:
lrwxrwxrwx. 1 root root   14 May  5 10:02 scp -> scp_wrapper.sh
-rwxr-xr-x. 1 root root  382 May  5 13:54 scp_wrapper.sh

With scp_wrapper.sh containing this:
#!/bin/bash
if [[ "$1" = "-t" && "$2" != "-"* ]]
then
        /bin/scp -t "$2"
        returncode=$?
else
        echo "scp_wrapper: Refused SCP command: '$*'"
        returncode=255
fi
exit ${returncode}

Using this wrapper, scp will only allow -t and no other option.

Good luck.