If you’ve read my blog in recent weeks, you know I’ve grown very worried about what 2008 will bring for b-to-b publishing. A few days ago, I wrote that it’s “time for b-to-b editors and publishers to build some fighting holes”: defensive positions from which they could ride out the coming onslaught of bad economic news. I promised then that I would “post some of my thoughts on what a b-to-b fighting hole looks like.” And given the news that the smartest guys on Wall Street think a recession is coming, I think today is the day for me to start discussing tactics.

Let’s start with a little story. A few weeks ago I had coffee with a long-time friend and journalist. We got to talking about new media. I told him about the remarkable work being done by Rob Curley‘s team at Loudoun Extra, and I told him that he should go straight home, log on and check it out. But my friend said that he did not have an Internet connection at his home. When my shock wore off, I asked why. And my friend, who makes pretty good money, said he didn’t want to pay for Web access. “It doesn’t seem worth it,” he said.

I was reminded of that conversation earlier today when an anonymous reader posted a comment to an earlier post of mine. That reader complained that “employers aren’t doing much to train their current employees and prepare them as online journalists.” That’s true, I thought. But I don’t care. I believe that journalists need to learn these skills themselves. As I said more than two years ago: “… at this point, you can’t blame the boss for not teaching these things. The difficult truth is that people who can’t insert a hyperlink, who won’t read a blog, who don’t know how to work with Photoshop and can’t upload a video file just aren’t worth having around anymore.”

Now, as difficult times loom, I’m taking an even harder stance. I’m urging employers not to offer any training in Web journalism. There are two reasons for this. Here they are:

1.
You cannot train someone to be part of a culture.

For someone to work on the Web, they must be part of the Web. That, after all, is what the Web means. The Web is a web. It exists as a series of connections. An online journalist isn’t a journalist who works online. He’s a journalist who lives online. He’s part of the Web. It’s a waste of time and money to teach multimedia skills and technology to someone who hasn’t already become part of the Web. And there’s no need to teach skills and technology to the journalists who are already part of Web culture, because the culture requires participation in skills and technology. Or, to put it another way: I cannot teach the Web. No one can. Yet all of us who are part of the Web are learning the Web.

2. When the fighting begins, the training must end.

We had a good run. For the past few years, life has been pretty easy for b-to-b publishers that have embraced the Web. We have been an army that has known nothing but victory. But if I’m right, the easy times are over. We have moved too far, too fast. Our lines are overextended. Our advance has been halted. We are vulnerable. We cannot move backward to round up the stragglers and train them to fight. It’s too late to try to convince print journalists that the Web has value. It’s too late to tell them that an Internet connection is worth a few dollars a month. As revenue shrinks, we can’t spend money on training. We can’t gather up the print folks and “prepare them as online journalists.” You can’t prepare people to dig a fighting hole. You just tell them to dig. And the ones who don’t dig fast enough, deep enough or well enough, die.

[Some readers are sure to be thinking: “Is he nuts? Isn’t training newsroom staffs part of what he does for a living?” To which I reply, “Yes. I am nuts. And I do offer training to newsroom staffs.” Odds are there’s something valuable I can offer to the staff at your publication. There are certainly non-training services I can offer your company.
Send an email to inquire (at) paulconley (dot) com and we can talk about it. Just don’t ask me to teach another “writing for the Web” course. There’s no room for Web newbies in a b-to-b fighting hole.]
H Street Main Street, in conjunction with H Street Corridor businesses, will host “Streetcar Stroll and Roll” events on July 26 and August 23 from 3 p.m. to 7 p.m. to promote the upcoming launch of the District’s first streetcar line and highlight the development of the H Street NE Corridor and its vibrant businesses.

A number of streetcar-style decorated pedicabs will provide the public with free rides along the H Street Corridor from Union Station to Maryland Ave. NE, the route of the upcoming H/Benning DC Streetcar line. While enjoying the pedicab rides, participants are encouraged to visit H Street businesses, restaurants and bars, enjoy displays from local artists that will be featured along the corridor, and experience the neighborhood’s unique atmosphere.

The H/Benning line will be the first segment of the DC Streetcar system, running 2.4 miles along H Street NE and serving residents, businesses, commuters and visitors between Union Station on the west and the Anacostia River on the east. Eventually, the H/Benning segment will be just one piece of the overall One City Line, which will cross the city east to west from beyond the Anacostia River all the way to the Georgetown waterfront and facilitate easy travel to the H Street Corridor and other areas of the District.
Beatles star Paul McCartney almost guest-starred on Friends: he was offered the role of Ross’ father-in-law. Emmy-nominated casting director Leslie Litt, who worked on the NBC hit series during most of its run, revealed that McCartney, now 72, could have appeared in the season 4 finale of the show as David Schwimmer’s on-screen father-in-law, but he turned it down, reported the Huffington Post.

“I went through his manager and gave him all the details. One day, someone in the office brought me a faxed letter written to me by Paul himself! He thanked me for my interest and said how flattered he was, but it was a very busy time for him,” Litt said.

If the British musician had agreed to do it, he would have appeared in the two-part season four finale, which aired in 1998. In that episode, Ross married Emily (Helen Baxendale) in London, though he accidentally said Rachel’s (Jennifer Aniston) name instead of his bride’s at the altar.
Kolkata: The state Power department has developed a comprehensive ‘Energy Action Plan’ in order to generate world-class electricity in Bengal. The Power department has been exploiting all its resources to ensure that the people of the state can avail themselves of the quality of electricity normally found in Western countries. In its attempt to produce the best quality power, the state government has focused on the renewable energy sector. A senior official of the department said that through the ‘Energy Action Plan’, the department aims to produce the best quality electricity, on par with Western countries, within the next 2-3 years.

“We are venturing into unknown areas of renewable energy sources, and in the coming years there will be a paradigm shift from conventional energy to renewable energy. We are taking all necessary steps to make the whole process more sustainable. How the grid integration will be done remains a big challenge for us,” a senior official of the Power department said. In the last year, more than 10 power sub-stations have been constructed across the state to maintain better quality of electricity and to address the voltage problems often reported from some pockets, the official added. In the solar energy sector, Bengal has already achieved significant growth through various projects.

Stressing the generation of hydroelectricity, the Bengal government has taken up a number of new initiatives. Several hydroelectric projects are coming up on the Teesta river, namely Teesta I, Teesta II, Teesta V, Teesta Intermediate Stage and Rammam Stage I in Darjeeling, each having a capacity of 80-84 MW. “Hydroelectric resources are not plentiful in Bengal.
Despite the challenges, we are trying our best to generate hydroelectricity, which is one of our main focus areas in the state now,” the official said.

It may be mentioned here that the Centre, at the Paris climate conference in 2015, vowed to catch up with other developed nations in the field of energy generation and power. The Centre has also made commitments before the United Nations, saying that it will achieve the target of producing 40 percent of its power through renewable sources by the end of 2030. The overall carbon emission level in the country will also be reduced within the same period. India has so far been successful in generating 20 percent of its total power through renewable sources. The country will achieve the goal if all the states place more emphasis on renewable energy, thereby contributing towards the cause.

Bengal is one of the states that has done a great deal of work on building infrastructure in the renewable energy sector. Since the Mamata Banerjee government came to power in the state, there has been significant infrastructural reform in the energy sector. Power generation from solar energy has been given paramount importance through the launch of the ‘Aloshree’ project, a brainchild of the Chief Minister. To this end, solar panels have been set up on the rooftops of various government buildings, schools, colleges and other offices by the Power department.
You might have worked with the popular MNIST dataset before, but in this article we will be generating new MNIST-like images with a Keras GAN. It can take a very long time to train a GAN; however, this problem is small enough to run on most laptops in a few hours, which makes it a great example. The following excerpt is taken from the book Deep Learning Quick Reference, authored by Mike Bernico.

The network architecture that we will be using here has been found and optimized by many folks, including the authors of the DCGAN paper and people like Erik Linder-Norén, whose excellent collection of GAN implementations, Keras-GAN, served as the basis of the code we use here.

Loading the MNIST dataset

The MNIST dataset consists of 70,000 hand-drawn digits, 0 to 9. Keras provides us with a built-in loader that splits it into 60,000 training images and 10,000 test images. We will use the following code to load the dataset:

    import numpy as np
    from keras.datasets import mnist

    def load_data():
        (X_train, _), (_, _) = mnist.load_data()
        X_train = (X_train.astype(np.float32) - 127.5) / 127.5
        X_train = np.expand_dims(X_train, axis=3)
        return X_train

As you probably noticed, we’re not returning any of the labels or the testing dataset. We’re only going to use the training dataset. The labels aren’t needed, because the only labels we will be using are 0 for fake and 1 for real. These are real images, so they will all be assigned a label of 1 at the discriminator.

Building the generator

The generator uses a few new layers that we will talk about in this section.
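One detail from load_data worth pausing on: the scaling maps pixel values from [0, 255] into [-1, 1], which matches the tanh activation the generator will use for its output layer. A standalone NumPy sketch of that transformation (using random data so nothing needs to be downloaded):

```python
import numpy as np

# Stand-in for what mnist.load_data() returns: a batch of 28x28 uint8 images
X = np.random.randint(0, 256, size=(4, 28, 28)).astype(np.float32)

# Same scaling as load_data(): center at 127.5, divide by 127.5 -> range [-1, 1]
X_scaled = (X - 127.5) / 127.5

# Add the trailing channel axis that the convolutional layers expect
X_scaled = np.expand_dims(X_scaled, axis=3)

print(X_scaled.shape)  # (4, 28, 28, 1)
```

A pixel of 0 maps to exactly -1 and a pixel of 255 maps to exactly 1, so the data lives in the same range as the generator's tanh output.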
First, take a moment to skim through the following code:

    def build_generator(noise_shape=(100,)):
        input = Input(noise_shape)
        x = Dense(128 * 7 * 7, activation="relu")(input)
        x = Reshape((7, 7, 128))(x)
        x = BatchNormalization(momentum=0.8)(x)
        x = UpSampling2D()(x)
        x = Conv2D(128, kernel_size=3, padding="same")(x)
        x = Activation("relu")(x)
        x = BatchNormalization(momentum=0.8)(x)
        x = UpSampling2D()(x)
        x = Conv2D(64, kernel_size=3, padding="same")(x)
        x = Activation("relu")(x)
        x = BatchNormalization(momentum=0.8)(x)
        x = Conv2D(1, kernel_size=3, padding="same")(x)
        out = Activation("tanh")(x)
        model = Model(input, out)
        print("-- Generator --")
        model.summary()
        return model

We have not previously used the UpSampling2D layer. This layer increases the rows and columns of the input tensor, leaving the channels unchanged. It does this by repeating the values in the input tensor. By default, it will double the input. If we give an UpSampling2D layer a 7 x 7 x 128 input, it will give us a 14 x 14 x 128 output.

Typically, when we build a CNN, we start with an image that is very tall and wide and use convolutional layers to get a tensor that’s very deep but less tall and wide. Here we will do the opposite. We’ll use a dense layer and a reshape to start with a 7 x 7 x 128 tensor and then, after doubling it twice, we’ll be left with a 28 x 28 tensor. Since we need a grayscale image, we can use a convolutional layer with a single unit to get a 28 x 28 x 1 output. This sort of generator arithmetic is a little off-putting and can seem awkward at first, but after a few painful hours you will get the hang of it!

Building the discriminator

The discriminator is really, for the most part, the same as any other CNN. Of course, there are a few new things that we should talk about.
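Before moving on, the generator's upsampling arithmetic described above (7 → 14 → 28) is easy to verify in plain NumPy. Repeating each row and column emulates what UpSampling2D does by default (a sketch of the shape arithmetic, not the Keras implementation itself):

```python
import numpy as np

def upsample2d(t):
    # Repeat rows then columns, like Keras UpSampling2D(size=(2, 2))
    return np.repeat(np.repeat(t, 2, axis=0), 2, axis=1)

t = np.zeros((7, 7, 128))   # the tensor produced by Dense + Reshape
once = upsample2d(t)        # doubled once
twice = upsample2d(once)    # doubled twice; a final Conv2D(1) maps this to 28 x 28 x 1

print(once.shape, twice.shape)  # (14, 14, 128) (28, 28, 128)
```

The channel axis is untouched, exactly as the text says: only the spatial dimensions grow.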
We will use the following code to build the discriminator:

    def build_discriminator(img_shape):
        input = Input(img_shape)
        x = Conv2D(32, kernel_size=3, strides=2, padding="same")(input)
        x = LeakyReLU(alpha=0.2)(x)
        x = Dropout(0.25)(x)
        x = Conv2D(64, kernel_size=3, strides=2, padding="same")(x)
        x = ZeroPadding2D(padding=((0, 1), (0, 1)))(x)
        x = LeakyReLU(alpha=0.2)(x)
        x = Dropout(0.25)(x)
        x = BatchNormalization(momentum=0.8)(x)
        x = Conv2D(128, kernel_size=3, strides=2, padding="same")(x)
        x = LeakyReLU(alpha=0.2)(x)
        x = Dropout(0.25)(x)
        x = BatchNormalization(momentum=0.8)(x)
        x = Conv2D(256, kernel_size=3, strides=1, padding="same")(x)
        x = LeakyReLU(alpha=0.2)(x)
        x = Dropout(0.25)(x)
        x = Flatten()(x)
        out = Dense(1, activation='sigmoid')(x)
        model = Model(input, out)
        print("-- Discriminator --")
        model.summary()
        return model

First, you might notice the oddly shaped ZeroPadding2D() layer. After the second convolution, our tensor has gone from 28 x 28 x 1 to 7 x 7 x 64. This layer just gets us back to an even number, adding zeros on one side of both the rows and columns so that our tensor is now 8 x 8 x 64.

More unusual is the use of both batch normalization and dropout. Typically, these two layers are not used together; however, in the case of GANs, they do seem to benefit the network.

Building the stacked model

Now that we’ve assembled both the generator and the discriminator, we need to assemble a third model that is the stack of both models together, which we can use for training the generator given the discriminator loss. To do that we can just create a new model, this time using the previous models as layers in the new model, as shown in the following code:

    discriminator = build_discriminator(img_shape=(28, 28, 1))
    generator = build_generator()

    z = Input(shape=(100,))
    img = generator(z)
    discriminator.trainable = False
    real = discriminator(img)
    combined = Model(z, real)

Notice that we’re setting the discriminator’s trainable attribute to False before building the model.
This means that for this model we will not be updating the weights of the discriminator during backpropagation. We will freeze these weights and only move the generator weights with the stack. The discriminator will be trained separately.

Now that all the models are built, they need to be compiled, as shown in the following code:

    gen_optimizer = Adam(lr=0.0002, beta_1=0.5)
    disc_optimizer = Adam(lr=0.0002, beta_1=0.5)

    discriminator.compile(loss='binary_crossentropy',
                          optimizer=disc_optimizer,
                          metrics=['accuracy'])

    generator.compile(loss='binary_crossentropy', optimizer=gen_optimizer)

    combined.compile(loss='binary_crossentropy', optimizer=gen_optimizer)

Notice that we’re creating two separate Adam optimizers. This is because many times we will want to change the learning rate for only the discriminator or only the generator, slowing one or the other down so that we end up with a stable GAN where neither is overpowering the other. You’ll also notice that we’re using beta_1=0.5. This is a recommendation from the original DCGAN paper that we’ve carried forward and also had success with. A learning rate of 0.0002 is a good place to start as well, and was also found in the original DCGAN paper.

The training loop

We have previously had the luxury of calling .fit() on our model and letting Keras handle the painful process of breaking the data apart into minibatches and training for us. Unfortunately, because we need to perform separate updates for the discriminator and the stacked model within a single batch, we’re going to have to do things the old-fashioned way, with a few loops. This is how things used to be done all the time, so while it’s perhaps a little more work, it does admittedly leave me feeling nostalgic.
The following code illustrates the training technique:

    num_examples = X_train.shape[0]
    num_batches = int(num_examples / float(batch_size))
    half_batch = int(batch_size / 2)

    for epoch in range(epochs + 1):
        for batch in range(num_batches):
            # noise images for the batch
            noise = np.random.normal(0, 1, (half_batch, 100))
            fake_images = generator.predict(noise)
            fake_labels = np.zeros((half_batch, 1))

            # real images for the batch
            idx = np.random.randint(0, X_train.shape[0], half_batch)
            real_images = X_train[idx]
            real_labels = np.ones((half_batch, 1))

            # Train the discriminator (real classified as ones and generated as zeros)
            d_loss_real = discriminator.train_on_batch(real_images, real_labels)
            d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # Train the generator
            noise = np.random.normal(0, 1, (batch_size, 100))
            g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

            # Plot the progress
            print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
                  (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))

            if batch % 50 == 0:
                save_imgs(generator, epoch, batch)

There is a lot going on here, to be sure. As before, let’s break it down block by block. First, let’s see the code to generate noise vectors:

    noise = np.random.normal(0, 1, (half_batch, 100))
    fake_images = generator.predict(noise)
    fake_labels = np.zeros((half_batch, 1))

This code generates a matrix of noise vectors (called z) and sends it to the generator. It gets a set of generated images back, which we’re calling fake images. We will use these to train the discriminator, so the labels we want to use are 0s, indicating that these are in fact generated images.

Note that the shape here is half_batch x 28 x 28 x 1. The half_batch is exactly what you think it is. We’re creating half a batch of generated images because the other half of the batch will be real data, which we will assemble next.
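The label bookkeeping in the loop above is easy to check in isolation: each discriminator update sees half a batch of fakes labeled 0 and half a batch of reals labeled 1, while the generator update asks for a full batch labeled 1. A standalone sketch, with a batch_size of 32 assumed purely for illustration:

```python
import numpy as np

batch_size = 32                 # assumed value for illustration
half_batch = int(batch_size / 2)

noise = np.random.normal(0, 1, (half_batch, 100))   # z vectors for the fake half
fake_labels = np.zeros((half_batch, 1))             # fakes -> 0
real_labels = np.ones((half_batch, 1))              # reals -> 1

# Generator update: a full batch of noise, all labeled "real"
gen_noise = np.random.normal(0, 1, (batch_size, 100))
gen_labels = np.ones((batch_size, 1))

print(noise.shape, gen_noise.shape)  # (16, 100) (32, 100)
```

Half-batch fakes plus half-batch reals give the discriminator one balanced batch per iteration, while the generator gets a full batch of gradient signal.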
To get our real images, we will generate a random set of indices across X_train and use that slice of X_train as our real images, as shown in the following code:

    idx = np.random.randint(0, X_train.shape[0], half_batch)
    real_images = X_train[idx]
    real_labels = np.ones((half_batch, 1))

Yes, we are sampling with replacement in this case. It does work out, but it’s probably not the best way to implement minibatch training. It is, however, probably the easiest and most common.

Since we are using these images to train the discriminator, and because they are real images, we will assign them 1s as labels, rather than 0s. Now that we have our discriminator training set assembled, we will update the discriminator. Also, note that we aren’t using soft labels. That’s because we want to keep things as easy as they can be to understand. Luckily, the network doesn’t require them in this case.

We will use the following code to train the discriminator:

    # Train the discriminator (real classified as ones and generated as zeros)
    d_loss_real = discriminator.train_on_batch(real_images, real_labels)
    d_loss_fake = discriminator.train_on_batch(fake_images, fake_labels)
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

Notice that here we’re using the discriminator’s train_on_batch() method. The train_on_batch() method does exactly one round of forward and backward propagation. Every time we call it, it updates the model once from the model’s previous state.

Also, notice that we’re making the update for the real images and fake images separately. This is advice given in the GAN hacks repository we previously referenced in the Generator architecture section. Especially in the early stages of training, when real images and fake images come from radically different distributions, batch normalization would cause problems with training if we were to put both sets of data in the same update.

Now that the discriminator has been updated, it’s time to update the generator.
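A quick aside on the sampling-with-replacement point above: np.random.randint draws each index independently, so the same training image can appear more than once in a single minibatch. A small demonstration using a tiny array as a stand-in for X_train:

```python
import numpy as np

data = np.arange(10)   # stand-in for X_train with only 10 "images"
half_batch = 16        # more draws than images guarantees repeats

idx = np.random.randint(0, data.shape[0], half_batch)
sample = data[idx]

# With 16 independent draws from only 10 items, at least one index must repeat
print(len(idx), len(np.unique(idx)) < len(idx))  # 16 True
```

With 60,000 MNIST images and a small half_batch, repeats within one batch are rare in practice, which is why this shortcut "does work out."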
This is done indirectly by updating the combined stack, as shown in the following code:

    noise = np.random.normal(0, 1, (batch_size, 100))
    g_loss = combined.train_on_batch(noise, np.ones((batch_size, 1)))

To update the combined model, we create a new noise matrix, and this time it will be as large as the entire batch. We will use that as an input to the stack, which will cause the generator to generate an image and the discriminator to evaluate that image. Finally, we will use the label of 1 because we want to backpropagate the error between a real image and the generated image.

Lastly, the training loop reports the discriminator and generator loss at each epoch/batch and then, every 50 batches of every epoch, uses save_imgs to generate example images and save them to disk, as shown in the following code:

    print("Epoch %d Batch %d/%d [D loss: %f, acc.: %.2f%%] [G loss: %f]" %
          (epoch, batch, num_batches, d_loss[0], 100 * d_loss[1], g_loss))

    if batch % 50 == 0:
        save_imgs(generator, epoch, batch)

The save_imgs function uses the generator to create images as we go, so we can see the fruits of our labor. We will use the following code to define save_imgs:

    def save_imgs(generator, epoch, batch):
        r, c = 5, 5
        noise = np.random.normal(0, 1, (r * c, 100))
        gen_imgs = generator.predict(noise)
        gen_imgs = 0.5 * gen_imgs + 0.5
        fig, axs = plt.subplots(r, c)
        cnt = 0
        for i in range(r):
            for j in range(c):
                axs[i, j].imshow(gen_imgs[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        fig.savefig("images/mnist_%d_%d.png" % (epoch, batch))
        plt.close()

It uses only the generator, creating a noise matrix and retrieving an image matrix in return. Then, using matplotlib.pyplot, it saves those images to disk in a 5 x 5 grid.

Performing model evaluation

“Good” is somewhat subjective when you’re building a deep neural network to create images. Let’s take a look at a few examples of the training process, so you can see for yourself how the GAN begins to learn to generate MNIST.
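One last detail worth verifying before looking at samples: the rescaling inside save_imgs. Generator outputs live in [-1, 1] because of the tanh activation, and 0.5 * x + 0.5 maps them back to [0, 1] for display. This can be checked in isolation with a tanh-shaped stand-in for the generator output:

```python
import numpy as np

# Stand-in for generator.predict(): tanh keeps values strictly in (-1, 1)
gen_imgs = np.tanh(np.random.normal(0, 1, (25, 28, 28, 1)))

# The save_imgs rescaling: [-1, 1] -> [0, 1], suitable for imshow
display = 0.5 * gen_imgs + 0.5

print(display.shape)  # (25, 28, 28, 1)
```

This is the exact inverse of the (x - 127.5) / 127.5 scaling applied in load_data, up to the 0-255 integer range.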
Here’s the network at the very first batch of the very first epoch. Clearly, the generator doesn’t really know anything about generating MNIST at this point; its output is just noise. But just 50 batches in, something is happening. And after 200 batches of epoch 0, we can almost see numbers.

Here’s our generator after one full epoch. These generated numbers look pretty good, and we can see how the discriminator might be fooled by them. At this point, we could probably continue to improve a little bit, but it looks like our GAN has worked, as the computer is generating some pretty convincing MNIST digits.

Thus, we see the power of GANs in action when it comes to image generation using the Keras library. If you found the above article useful, make sure you check out the book Deep Learning Quick Reference for more such interesting coverage of popular deep learning concepts and their practical implementation.
ClojureCUDA is a Clojure library that supports parallel computations on the GPU with CUDA. With this library, you can access high-performance computing and GPGPU from Clojure.

Installation

ClojureCUDA 0.6.0 now has support for the new CUDA 10. To start using it:

Install the CUDA 10 Toolkit.
Update your drivers.
Update the ClojureCUDA version in project.clj.

All existing code should work without requiring any changes.

CUDA and libraries

CUDA is the most widely used environment for high-performance computing on NVIDIA GPUs. You can now use CUDA directly from the interactive Clojure REPL without having to wrangle with the C++ toolchain. High-performance libraries like Neanderthal take advantage of ClojureCUDA to deliver speed dynamically to Clojure programs. With these higher-level libraries, you can perform fast calculations with just a few lines of Clojure; you don’t even have to write the GPU code yourself. But writing the lower-level GPU code is also not so difficult in an interactive Clojure environment.

ClojureCUDA features

High-performance computing: CUDA enables various hardware optimizations on NVIDIA GPUs. Users can access the leading CUDA libraries for numerical computing, such as cuBLAS, cuFFT, and cuDNN.

Optimized for Clojure: ClojureCUDA is built with a focus on Clojure. The interface and functions fit into a functional style and are aligned to number crunching with CUDA.

Reusable: The library closely follows the CUDA driver API, so users can easily translate examples from the best CUDA books.

Free and open source: It is licensed under the Eclipse Public License (EPL), the same license used for Clojure. ClojureCUDA and other libraries by uncomplicate are open source. You can choose to contribute on GitHub or donate on Patreon.

For more details and code examples, visit Dragan’s blog.
Thursday, November 23, 2017

NEW YORK — Insurers for American Airlines, United Airlines and other aviation defendants have agreed to pay $95 million to settle claims that security lapses led planes to be hijacked in the Sept. 11 attacks. The settlement was described in papers filed Tuesday in Manhattan federal court. Developers of the new World Trade Center buildings had once demanded $3.5 billion from aviation-related companies after hijacked planes destroyed three skyscrapers among five demolished buildings on Sept. 11, 2001.

Lawyers said the agreement signed last week resulted from “extensive, arms-length negotiations” by lawyers “who worked diligently for months.” The agreement also said the parties make no admissions or concessions with respect to liability for the attacks. “The court’s approval of the settlement agreement will bring to a close this hard-fought 13-year litigation on terms agreeable to the parties,” the lawyers said. Attorney Desmond T. Barry Jr., who submitted the papers to U.S. District Judge Alvin K. Hellerstein, declined to comment Wednesday.

Developer Larry Silverstein and World Trade Center Properties have collected more than $5 billion from other defendants through lawsuits.
The money has aided the reconstruction of buildings on the 16-acre lower Manhattan site. Earlier settlements included $135 million paid to a financial services firm that lost two-thirds of its employees.

American Airlines spokesman Matt Miller said the company is pleased to have reached a settlement. “We will never forget that terrible day and its lasting impact, including the tragic loss of 23 members of the American Airlines family,” said Miller. United Airlines declined to comment. Bud Perrone, a spokesman for Silverstein, said the company is “pleased to have finally reached a resolution to this piece of post-9/11 litigation.”

Source: The Associated Press
TORONTO — Watch for St. Vincent and the Grenadines to make some moves in Canada, as the destination has selected VoX International as its Canadian public relations and communications representative. As such, VoX will manage media relations and day-to-day operations for the Canadian market. It will also host a number of media fam trips to the islands.

“By selecting VoX International as our public relations agency in Canada, we are showing our commitment to the Canadian travel market by working with an industry-leading company,” said Glen Beache, CEO, SVGTA. “We cannot wait to welcome Canadian travellers to our destination in 2018.”

St. Vincent and the Grenadines is poised to welcome more Canadians this year with direct air service, the opening of its new Argyle International Airport, and a full calendar of festivals. A collection of 32 islands and cays in the Caribbean, St. Vincent and the Grenadines stretches 45 miles south from the main island of St. Vincent and includes eight more inhabited islands. It’s known to have some of the best sailing waters in the world; the recently opened US$250 million Glossy Bay Marina on Canouan has set the bar for future marine development in the region.

On land, St. Vincent and the Grenadines offers hiking along the mountainous Vermont Nature Trail, as well as a wide assortment of world-class restaurants on all nine inhabited islands. Blessed with fertile volcanic soil, the islands are known for locally grown fruits, vegetables and spices, all of which are used to create such specialties as seafood callaloo soup, roasted breadfruit, and fried jack fish.

Agents can learn more about the destination by visiting travelweeklearningcentre.com/st-vincent-and-the-grenadines.

VoX appointed Canadian PR rep for St. Vincent and the Grenadines
Monday, March 26, 2018
Scandinavian online movie service Voddler is launching a service that lets customers buy movies and store them in the cloud. Under the new service, customers can store purchased movies on Voddler’s network, which can then be accessed on various devices, including PCs, smartphones and tablets, by logging into their Voddler account. Users will also be able to download the movies.

At launch, Voddler is offering 200 titles to purchase for between SEK69 (€8) and SEK139. Voddler’s rental service currently costs between SEK19 and SEK37 per film.

“Film buyers would like to not have the hassle of remembering where they put their bought movies, regardless of [whether] it’s a physical disc or a downloaded file. A cloud-purchased movie is accessible everywhere where you have an internet-connected screen. You buy it once and access it everywhere. At the same time, we recognise that some consumers still prefer to keep the file themselves, so we also offer traditional downloading,” said Anders Sjöman, Voddler’s head of communication.

Voddler launched its service in 2010 and has deals in place with 35 studios. It faces stiff competition from other operators, including Netflix, Lovefilm and MTG’s Viaplay service.
North America and Asia are the most active mobile video markets in the world, according to a new report by Kagan, a division of S&P Global Market Intelligence. The study, which looked at 159 mobile network operators across the world, found that in the Asia-Pacific and North/South American regions, 86% of operators offer mobile video services. This was followed by 71% in the Middle East and Africa region, while Europe had the fewest mobile video services available, at just 28%, which Kagan attributed in part to concerns over net neutrality rules. Overall, 64% of the 159 global mobile operators reviewed in the report were found to offer a mobile video service directly or via a partnership.

“As 5G moves forward, over the next decade consumers will increasingly make a choice between wires versus wireless for home broadband,” said John Fletcher, principal research analyst with S&P Global Market Intelligence. “Mobile network operators are already bundling video services alongside mobile phone services to not only retain existing customers but to position themselves as a future one-stop shop for home broadband and video services.”